Average Gift Size Testing and Statistical Significance

Last time, I discussed testing the statistical difference in response rates. In this post, we will tackle average gift size testing.

I'll start with the problem we always run into: someone gives us the average gift sizes of a test and a control and asks whether the difference is statistically significant. The problem is that you can't use summary statistics to test average gift sizes. Here's why: averages are susceptible to skewing. For example, the control could have an average gift of $75 and the test an average of $25, and the difference still might not be statistically significant, because the control may have received a single $5,000 gift that is skewing the results.

Therefore, to properly conduct statistical testing on averages, you must have the whole distribution of gifts. Each and every gift.

There are a number of different tests one can use to test significance. One of the most common is the t-test. In SPSS, the stats software we use at Analytical Ones, this test is easy to run and analyze.

Just like in response rate testing, you also need to know the level of test confidence you are comfortable with. The higher the confidence level you require, the lower the risk of making a bad change. Again, for direct marketing tests, we recommend using a 90% level of confidence. No one is going to die if we make a bad decision – unlike in pharmaceutical testing.

The trick with all this is the need to look at BOTH average gift size AND response rate when evaluating the "winning" package. Often...
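To make the skewing point concrete, here is a minimal sketch of the t-test idea in Python rather than SPSS (purely for illustration; the gift amounts are made up to mirror the $75-vs-$25 example above, and I use Welch's version of the t statistic, which does not assume equal variances):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    se = (variance(a) / na + variance(b) / nb) ** 0.5  # std. error of the difference
    return (mean(a) - mean(b)) / se

# Hypothetical gift distributions: the control averages ~$75 only because
# of a single $5,000 gift; the test averages $25.
control = [25] * 99 + [5000]   # mean = $74.75
test = [20, 30] * 50           # mean = $25.00

t = welch_t(control, test)
# The one $5,000 gift inflates the control's variance so much that t lands
# around 1.0, well below the ~1.645 cutoff for 90% confidence, so the $50
# gap in averages is NOT statistically significant.
```

Notice that the function needs every individual gift, not just the two averages: that is exactly why summary statistics alone aren't enough for this test.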
Response Rate Testing and Statistical Significance

One of the more straightforward analyses we often do at Analytical Ones is comparing the results of a direct mail test to identify whether the differences are statistically significant. Though this is a straightforward analysis, there is a lot to these tests. So, let me try to clarify a couple of things.

Generally, we are testing to determine whether the differences in either the response rates or the average gift sizes are statistically significant. These two tests are very different and require different data sets and methods.

Let's take the easy one first: testing the statistical difference in response rates. Because there are just two outcomes for response – yes, the donor responded, denoted by a value of "1", or no, the donor did not respond, denoted by a value of "0" – you can use summary statistics (averages) for this test. We recommend using a Z test. All you need for testing response rates are:

1. Number mailed in the control
2. Number of responses in the control
3. Number mailed in the test
4. Number of responses in the test

You also need to know the level of test confidence you are comfortable with. Confidence levels are standards. Let me try to explain this. Typically, in Z tests, analysts use one of three confidence levels:

1. 99%
2. 95%
3. 90%

If your results are significant at the 99% confidence level, it means there is only about a 1-in-100 chance you would see a difference this large if the two packages actually performed the same. That's a very high level of confidence. Conversely, if your results are significant at...
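The two-proportion Z test needs exactly the four numbers listed above. As a sketch (in Python rather than SPSS, and with made-up mail quantities chosen only for illustration):

```python
from math import sqrt, erf

def response_rate_z(mailed_control, resp_control, mailed_test, resp_test):
    """Two-proportion Z test on response rates, pooling the two groups'
    rates under the null hypothesis that both packages respond equally."""
    p1 = resp_control / mailed_control
    p2 = resp_test / mailed_test
    pooled = (resp_control + resp_test) / (mailed_control + mailed_test)
    se = sqrt(pooled * (1 - pooled) * (1 / mailed_control + 1 / mailed_test))
    return (p2 - p1) / se

def two_sided_p(z):
    """Two-sided p-value from the standard normal CDF."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical mailing: control 10,000 mailed / 120 gifts (1.2% response),
# test 10,000 mailed / 160 gifts (1.6% response).
z = response_rate_z(10_000, 120, 10_000, 160)
p = two_sided_p(z)
# z comes out near 2.4, so this lift clears the 90% cutoff (z > 1.645)
# and the 95% cutoff (z > 1.96), but not the 99% cutoff (z > 2.576).
```

The three confidence levels in the list map to the usual two-sided critical values of roughly 1.645 (90%), 1.96 (95%), and 2.576 (99%); the test result simply tells you which of those hurdles your observed difference clears.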