One of the more straightforward analyses we often do at Analytical Ones is to compare the results of a direct mail test to identify whether the differences are statistically significant.
Though this is a straightforward analysis, there is a lot to these tests. So, let me try to clarify a couple of things.
Generally, we are testing to determine whether the differences in either the response rates or average gift sizes are statistically significant. These two tests are very different and require different data sets and methods.
Let’s take the easy one first, testing the statistical difference in response rates. Because there are just two outcomes for response – yes, the donor responded, denoted by a value of “1”, or no, the donor did not respond, denoted by a value of “0” – you can use summary statistics (averages) for this test. We recommend using a Z test.
All you need for testing response rates are:
1. Number of pieces mailed in the control
2. Number of responses in the control
3. Number of pieces mailed in the test
4. Number of responses in the test
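With only those four counts, the two-proportion Z test can be sketched in a few lines of Python. The counts below are made-up numbers for illustration, not real campaign data:

```python
import math

def z_test_response_rates(mailed_control, resp_control, mailed_test, resp_test):
    """Two-proportion Z test on response rates, using only the four counts."""
    p_control = resp_control / mailed_control
    p_test = resp_test / mailed_test
    # Pooled response rate across both panels
    p_pool = (resp_control + resp_test) / (mailed_control + mailed_test)
    # Standard error of the difference, assuming no real difference (the null)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / mailed_control + 1 / mailed_test))
    return (p_test - p_control) / se

# Hypothetical example: 1.2% control response vs. 1.5% test response
z = z_test_response_rates(10_000, 120, 10_000, 150)
print(round(z, 2))  # → 1.84
```

The Z statistic is then compared against the critical value for whatever confidence level you have chosen.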
You also need to know the level of confidence you are comfortable with. Confidence levels are standard thresholds. Let me try to explain this.
Typically, in Z tests, analysts use one of three confidence levels: 90%, 95%, or 99%.
If your results are significant at the 99% confidence level, it means there is at most a 1% chance you would see a difference this large if the control and the test actually performed the same. That's a very high level of confidence. Conversely, if your results are significant at the 90% confidence level, there is at most a 10% chance the difference is just noise. A strong level of confidence, but not as high a standard as 99%.
The level of confidence you require should be proportional to the level of risk in making a change. So, for direct marketing tests, we recommend using a 90% level of confidence. No one is going to die if we make a bad decision, unlike in pharmaceutical testing. So, if the test package beats the control with a 90% level of confidence, the recommendation would be to roll out with the new test package.
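The decision rule above can be sketched in Python using the standard two-tailed critical Z values; the Z statistic of 1.84 is a made-up example, not a real result:

```python
# Two-tailed critical Z values for the three standard confidence levels
CRITICAL_Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def is_significant(z, confidence=0.90):
    """True if the observed Z statistic clears the chosen confidence bar."""
    return abs(z) >= CRITICAL_Z[confidence]

# Hypothetical Z statistic of 1.84 from a response-rate test
print(is_significant(1.84, 0.90))  # True:  roll out the test package
print(is_significant(1.84, 0.95))  # False: not significant at 95%
```

Note how the same result clears the 90% bar but not the 95% bar, which is exactly why the confidence level has to be chosen before the test, based on the risk of the decision.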
In my next blog, I’ll discuss testing average gift size differences.
By the way, here is a link to an online tool you can use to test response rate differences.