Are My Response Rates Statistically Significant?

I’m taking a different approach in today’s blog to discuss a problem I keep running into when working with clients: testing whether the response rates in a direct mail test are meaningful. Response rates are relatively uncomplicated to test. Anyone can do it, because there are only two possible outcomes: response or nonresponse.

Here’s an online testing tool for you to use. After you click on the link, you will need to enter your data. The “base sizes” are the number of pieces mailed in each of your tests. The “proportions” are the number of responses for each test divided by the number mailed. You can choose your confidence level (we recommend 90%), and then click calculate. Voila! The “Results” box will state either “Significant” or “NOT Significant.” You don’t need to understand statistics to use this tool!

I suggest a 90% confidence level because it is a sensible standard for direct marketing decisions. Loosely speaking, it means you accept about a 10% chance that a difference the tool calls significant is really just random noise.

Another helpful metric to know is your average gift size. Testing average gift size takes more work, because the values you are testing can range from $0.01 to $1 million or more. That means you can’t use summary statistics (averages) alone to calculate significance. Instead, you need the entire gift distribution. Plus, you will need a more robust software tool than Excel to do this. We use SPSS. We suggest you talk with your analytics people to have them help you through testing average gift size...
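If you would rather run the numbers yourself instead of (or alongside) the online tool, the underlying calculation is a standard two-proportion z-test. Here is a minimal sketch in Python; the blog doesn’t prescribe any particular software for this step, and the mail quantities and response counts below are invented purely for illustration.

```python
# A minimal sketch of the two-proportion test behind "is this response rate lift real?"
# All quantities below are made-up numbers for illustration.
from math import sqrt
from scipy.stats import norm

def two_proportion_test(resp_a, mailed_a, resp_b, mailed_b, confidence=0.90):
    """Pooled two-proportion z-test: are two response rates significantly different?"""
    p_a, p_b = resp_a / mailed_a, resp_b / mailed_b
    p_pool = (resp_a + resp_b) / (mailed_a + mailed_b)            # pooled response rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / mailed_a + 1 / mailed_b))
    z = (p_a - p_b) / se
    p_value = 2 * norm.sf(abs(z))                                  # two-sided p-value
    return p_a, p_b, p_value, p_value < (1 - confidence)

# Example: control mails 25,000 pieces with 150 responses (0.60%);
# test mails 25,000 pieces with 190 responses (0.76%).
rate_a, rate_b, p_value, significant = two_proportion_test(150, 25_000, 190, 25_000)
print(f"control {rate_a:.2%} vs test {rate_b:.2%}: p = {p_value:.3f}, "
      f"{'Significant' if significant else 'NOT Significant'} at 90% confidence")
```

With these made-up counts the p-value comes out around 0.03, so the lift would be called significant at the 90% confidence level, which is exactly the verdict the online calculator would give you.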
Postage Going Up

On January 27, first class postage is going up to 55 cents, a 5-cent increase. This is the largest increase in almost 30 years. As this article states: “The Postal Service lost $3.9 billion in 2018, attributing the losses to drops in mail volume and the costs of pensions and health care. It marked the 12th year in a row the agency reported a loss despite growth in package shipping.”

$3.9 billion. That’s a big number. To put it in context, the USPS lost almost half a million dollars every hour, 24 hours a day. That’s just not sustainable.

We have been fortunate in this country to have had such a great postal system. It has been fast, accurate and affordable. Nonprofits have been doubly blessed by the US Postal Service’s postal subsidy for nonprofit organizations. But that is changing before our eyes. Delivery of nonprofit mail is getting slower and slower and impossible to predict. Nonprofit mail is now delivered in batches. That means when your appeal finally does get delivered, it hits your donors’ mailboxes at the same time as every other nonprofit’s mail. This fall I received 8 appeals on the same day. So now your appeals are not only unpredictably slow to arrive, they are facing greater competition.

I can’t help but think that the nonprofit postal subsidies will likely end soon. Here’s what I recommend while there is still time: analyze the ROI of your donor segments this fall and test first class postage in your top-performing segments. With reliable nonprofit postage, first class postage would be an extravagance. However, you are paying a lot...
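If you want to put numbers behind that segment-level recommendation, the comparison is straightforward arithmetic: does the lift from faster, more predictable first class delivery cover the extra postage? Here is a small hypothetical sketch; apart from the 55-cent first class rate quoted above, every figure (nonprofit postage rate, print cost, response rates, average gift) is an invented placeholder you would replace with your own segment data.

```python
# A back-of-envelope sketch of the segment ROI comparison suggested above.
# All figures except the 55-cent first class rate are hypothetical placeholders.
def segment_roi(pieces, response_rate, avg_gift, print_cost, postage):
    """Net revenue per dollar spent for one mailed segment."""
    cost = pieces * (print_cost + postage)
    revenue = pieces * response_rate * avg_gift
    return (revenue - cost) / cost

pieces, avg_gift, print_cost = 10_000, 42.00, 0.45
nonprofit   = segment_roi(pieces, 0.052, avg_gift, print_cost, postage=0.19)
first_class = segment_roi(pieces, 0.060, avg_gift, print_cost, postage=0.55)  # assumes some response lift

print(f"nonprofit postage ROI: {nonprofit:.2f}, first class ROI: {first_class:.2f}")
```

Run this for each of your top-performing segments with their actual response rates and gift sizes; the test tells you whether the response lift you actually observe is large enough to justify the 36-cent postage premium.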
Improving the Acquisition Experience

Allegories for Unintegrated Direct Mail Acquisition Testing Activities:

• Tailor-fitting a funeral suit
• Climbing a tall ladder on a building that’s on fire
• Planting orange trees above the freeze line

Don’t get me wrong. I love direct mail. But sometimes, context matters. Spending hours and hours coming up with new ideas on how to beat your direct mail acquisition control kit is probably not the best use of your time right now – especially if you aren’t integrating your direct mail with other media campaigns. Realistically, moving your direct mail acquisition response rate from 0.36% to 0.56% isn’t going to solve all your problems. New donor acquisition is a long-term problem, and you need an integrated solution.

I suggest that, this time, you stick with your tried-and-true acquisition kit. Reinvest the time and resources you’d otherwise spend on direct mail testing protocols into brainstorming ways to engage a completely different, new audience.

I know that from a career standpoint that seems bold. Everyone knows you can’t get fired for doing the “safe” thing. But as you can tell by looking at our stock market, there are no safe things right now. These new layers to an old problem require innovative solutions. So be bold. Test something that’s never been tried – and then tell us about...
My Fall Mailbox

Like many of you, my mailbox is stuffed with nonprofit appeals between October 1st and December 31st. Here is a recap of the direct mail I received this year, and the insights I noted:

• I received an even 50 pieces of direct mail. Thirty-five pieces were appeals from 11 organizations I have previously supported, while 15 pieces were new donor acquisition kits.

• Sadly, direct mail with nonprofit postage all arrived on the same day of the week. It appears that the Post Office holds all of your direct mail and then efficiently bulk delivers it. This is a killer for nonprofit organizations, as their appeal must compete with all the other messages delivered that day. It may be worth testing the use of first-class postage in fourth quarter acquisition next year, to limit direct competition in the same mailbox.

• I received the most pieces from The Salvation Army (14) – though half of them were duplicates. I have supported The Army in two different Divisions, both of which sent me identical appeals. I contacted one Division about this issue last year, but the message must not have reached the right people.

• I received 7 appeals from my local PBS station.

• All the other organizations I have sent money to in the past sent me between 1 and 3 pieces of direct mail – probably reflecting my lapsed status . . .

• World Vision sent me the only two catalogs I received. The sole difference between them was the cover. It appears we are in a down cycle for catalogs, as I used to...
A Fall of Concern

Here we are again in the midst of the big fundraising season. As I type, nonprofit organizations’ direct mail packages are landing in our mailboxes – with visions of response rates dancing in their heads. This is always an anxiety-filled time.

Usually, election cycles don’t have a big impact on nonprofit giving. And though this is no usual year, a donor survey we conducted last week indicates that political giving is not siphoning money away from charities. Still, the divisions and heated national discourse we have all been experiencing this past year aren’t exactly creating the peace-on-earth, goodwill-toward-men moment we hope for in the giving season. This is not the most charitable environment.

But the truth is that we’ve been here before. As I look back at my 25 years in the business, fundraising made it through the fall of the dot-com bust. We made it through 9/11 and the anthrax scare. And we survived the Great Recession. That’s not to say we don’t all have some bumps and bruises from these events, but the generosity of the American people seems to have a way of shining through and saving the day. I’m optimistic that we will have a good fundraising season, and I can’t wait to dig into all the donor databases next January to see just how good you all...
Average Gift Size Testing and Statistical Significance

Last time, I discussed testing the statistical difference in response rates. In this blog, we will tackle average gift size testing.

I’ll start with the problem we always run into: someone will give us the average gift sizes of a test and a control, and ask if the difference is statistically significant. The problem is that you can’t use summary statistics to test average gift sizes. Here’s why: average gifts are susceptible to skewing. For example, the control could have an average gift size of $75 and the test $25, and the difference still might not be statistically significant, because the control may have had a single $5,000 gift that is skewing the results. Therefore, in order to properly conduct statistical testing on averages, you must have the whole distribution of gifts. Each and every gift.

There are a number of different tests one can use to test significance. One of the most common is the t-test. In SPSS, the stats software we at Analytical Ones use, this test is easy to run and analyze.

Just like in response rate testing, you also need to know the level of test confidence you are comfortable with. The higher the confidence level you require, the lower the risk of acting on a difference that isn’t real. Again, for direct marketing tests, we recommend a 90% level of confidence. No one is going to die if we make a bad decision – unlike pharmaceutical testing.

The trick with all this is the need to look at BOTH average gift size AND response rate when evaluating the “winning” package. Often...
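To make the skewing point concrete, here is a minimal sketch of a gift-level t-test. We run this kind of test in SPSS, but the logic is the same in any stats package; the sketch below uses Python’s SciPy (specifically the Welch variant of the t-test, which does not assume equal variances), and the gift amounts are invented to mirror the single-large-gift example above.

```python
# A minimal sketch of a gift-level t-test, using Python/SciPy rather than SPSS.
# The gift arrays are invented to mirror the "one $5,000 gift skews the average" example.
import numpy as np
from scipy.stats import ttest_ind

# Every individual gift, not just the averages.
control_gifts = np.array([25, 30, 20, 35, 40, 25, 30, 20, 45, 5000], dtype=float)
test_gifts    = np.array([20, 25, 30, 25, 20, 35, 25, 30, 20, 25], dtype=float)

print(f"control average: ${control_gifts.mean():.2f}")   # pulled way up by one $5,000 gift
print(f"test average:    ${test_gifts.mean():.2f}")

# Welch's t-test on the full distributions (does not assume equal variances)
stat, p_value = ttest_ind(control_gifts, test_gifts, equal_var=False)
alpha = 0.10  # 90% confidence level
print(f"p-value: {p_value:.3f} -> "
      f"{'Significant' if p_value < alpha else 'NOT Significant'} at 90% confidence")
```

With these made-up gifts, the control average looks dramatically higher than the test average, yet the t-test comes back NOT significant at 90% confidence, because a single huge gift is doing all the work. That is exactly why you need every gift, not just the two averages.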