

Jul 6, 2017

Response Rate Testing and Statistical Significance

One of the more straightforward analyses we often do at Analytical Ones is to compare the results of a direct mail test to identify whether the differences are statistically significant.

Though this is a straightforward analysis, there is a lot to these tests, so let me try to clarify a couple of things.

Generally, we are testing to determine whether the differences in either the response rates or the average gift sizes are statistically significant. These two tests are quite different and require different data sets and different methods.

Let’s take the easy one first, testing the statistical difference in response rates. Because there are just two outcomes for response – yes, the donor responded, denoted by a value of “1”, or no, the donor did not respond, denoted by a value of “0” – you can use summary statistics (averages) for this test. We recommend using a Z test.

All you need for testing response rates are:

1. Number mailed in the control
2. Number of responses in the control
3. Number mailed in the test
4. Number of responses in the test
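Those four counts are all you need to compute the Z statistic by hand. Here is a minimal sketch in Python; the function name and the example counts (10,000 mailed per panel, 1.00% vs. 1.25% response) are hypothetical, not from the post:

```python
from math import sqrt

def response_rate_z(mailed_control, resp_control, mailed_test, resp_test):
    """Two-proportion pooled Z statistic for comparing response rates."""
    p1 = resp_control / mailed_control   # control response rate
    p2 = resp_test / mailed_test         # test response rate
    # Pool the two panels to estimate the shared response rate under
    # the null hypothesis that the packages perform the same
    p = (resp_control + resp_test) / (mailed_control + mailed_test)
    se = sqrt(p * (1 - p) * (1 / mailed_control + 1 / mailed_test))
    return (p2 - p1) / se

# Hypothetical example: 10,000 mailed in each panel,
# 100 responses (1.00%) for the control vs. 125 (1.25%) for the test
z = response_rate_z(10000, 100, 10000, 125)
print(round(z, 2))  # about 1.68
```

A z of roughly 1.68 clears the 90% confidence bar (1.645) but not the 95% bar (1.96), which is exactly the kind of borderline result direct mail tests often produce.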

You also need to know the confidence level you are comfortable with. Confidence levels are standardized, so let me try to explain them.

Typically, in Z tests, analysts use one of three confidence levels:

1. 99%
2. 95%
3. 90%

If your results are significant at the 99% confidence level, it means that if there were truly no difference between the packages, a difference this large would show up by chance in fewer than 1 in 100 repeated tests. That's a very high level of confidence. Conversely, if your results are significant at the 90% confidence level, a chance result this large would show up in fewer than 10 in 100 repeated tests. A strong level of confidence, but not as high a standard as 99%.

The confidence level you require should rise with the risk of making a bad change. For direct marketing tests, we recommend a 90% level of confidence: no one is going to die if we make a bad decision, unlike in pharmaceutical testing. So, if the test package beats the control with a 90% level of confidence, the recommendation would be to roll out with the new test package.
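That decision rule can be sketched in a few lines. The critical values below are the standard two-tailed Z thresholds for each confidence level; the function name and the 1.68 example are illustrative, not from the post:

```python
# Standard two-tailed critical values for the three usual confidence levels
CRITICAL = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def significant(z, confidence=0.90):
    """True if |z| clears the two-tailed threshold for the given confidence."""
    return abs(z) >= CRITICAL[confidence]

# A z of 1.68 clears the recommended 90% bar, so the test package rolls out,
# even though it would not clear the stricter 95% bar
print(significant(1.68, 0.90))  # True
print(significant(1.68, 0.95))  # False
```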

In my next blog, I’ll discuss testing average gift size differences.

By the way, here is a link to an online tool you can use to test response rate differences.



© 2020 Analytical Ones