

Jul 6, 2017

Response Rate Testing and Statistical Significance

One of the more straightforward analyses we often do at Analytical Ones is to compare the results of a direct mail test to identify whether the differences are statistically significant.

Though this is a straightforward analysis, there is a lot to these tests. So, let me try to clarify a couple of things.

Generally, we are testing to determine whether the differences in either the response rates or the average gift sizes are statistically significant. Each of these tests is quite different and requires a different data set and a different test.

Let’s take the easy one first, testing the statistical difference in response rates. Because there are just two outcomes for response – yes, the donor responded, denoted by a value of “1”, or no, the donor did not respond, denoted by a value of “0” – you can use summary statistics (averages) for this test. We recommend using a Z test.

All you need for testing response rates are:

1. Number mailed in the control
2. Number of responses in the control
3. Number mailed in the test
4. Number of responses in the test
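As a minimal sketch, here is how those four inputs feed a two-proportion Z test in Python. The function name and the mailing counts are hypothetical, chosen only for illustration; the formula is the standard pooled-proportion Z statistic.

```python
import math

def response_rate_z(mailed_control, resp_control, mailed_test, resp_test):
    """Two-proportion Z statistic for comparing response rates."""
    p1 = resp_control / mailed_control   # control response rate
    p2 = resp_test / mailed_test         # test response rate
    # Pooled response rate under the null hypothesis (no real difference)
    p = (resp_control + resp_test) / (mailed_control + mailed_test)
    se = math.sqrt(p * (1 - p) * (1 / mailed_control + 1 / mailed_test))
    return (p2 - p1) / se

# Hypothetical example: 10,000 mailed in each panel; the control pulls
# 500 responses (5.0%) and the test pulls 560 responses (5.6%)
z = response_rate_z(10_000, 500, 10_000, 560)
```

The larger the absolute value of z, the less likely the observed difference is due to chance alone; the next step is comparing it to a critical value for your chosen confidence level.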

You also need to know the level of confidence you are comfortable with. Confidence levels are standards. Let me try to explain this.

Typically, in Z tests, analysts use one of three confidence levels:

1. 99%
2. 95%
3. 90%

If your results are significant at the 99% confidence level, it means there is only a 1% chance you would see a difference this large if the control and the test actually performed the same. That’s a very high level of confidence. Conversely, if your results are significant at the 90% confidence level, there is a 10% chance that the difference you saw was just random variation. A strong level of confidence, but not as high a standard as 99%.

The level of confidence is inversely related to the level of risk in making a change. So, for direct marketing tests, we recommend using a 90% level of confidence. No one is going to die if we make a bad decision – unlike pharmaceutical testing. So, if the test package beats the control with a 90% level of confidence, the recommendation would be to roll out with the new test package.
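The decision rule above can be sketched by comparing the Z statistic to the standard two-sided critical values for each confidence level. The helper function and the example z value of 1.89 are hypothetical; the critical values themselves are standard normal quantiles.

```python
# Two-sided critical Z values for the three standard confidence levels
CRITICAL_Z = {0.99: 2.576, 0.95: 1.960, 0.90: 1.645}

def significant(z, confidence=0.90):
    """True if |z| exceeds the critical value for the chosen confidence level."""
    return abs(z) >= CRITICAL_Z[confidence]

# With a hypothetical z = 1.89 from a test vs. control comparison:
significant(1.89, 0.90)  # True  -> roll out with the test package
significant(1.89, 0.95)  # False -> not significant at the stricter standard
```

This illustrates the trade-off: the same test result can clear the 90% bar while falling short of 95%, which is why the confidence level should be chosen before the test based on the risk of the decision.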

In my next blog, I’ll discuss testing average gift size differences.

By the way, here is a link to an online tool you can use to test response rate differences.



© 2021 Analytical Ones