The Last Word in Measuring Engagement

If you could summarize the consultant chatter of the past decade into one phrase, "donor engagement" would be my nominee. But what does that mean, exactly? As a donor behavior scientist, I am a firm believer in what Peter Drucker said: "If you can't measure it, you can't manage it." So, here at Analytical Ones, we've developed a new Engagement Score (ES). We think it's the last word in measuring engagement.

Over the past decade, we've experimented with many different models to measure engagement. The problem is that they tend to get far too complicated, and sometimes a simple metric is better than a perfect metric.

Last week, my business partner stumbled upon this reddit link and shared it with me: https://i.redd.it/fy0zvuob8tfz.jpg

It shows the different ways Starbucks calculates long-term value (LTV). To summarize, their "average" 20-year customer LTV is $14,100. Think about that. The average beverage at Starbucks costs between $3 and $4. Talk about engagement.

Now, at Analytical Ones, we think a 20-year LTV covers far too long a period, so we use a 5-year LTV instead. On that basis, Starbucks' 5-year LTV would be $3,525 ($14,100 / 20 years × 5 years).

Our new ES rests on two assumptions: 1) that engagement is best measured by LTV; and 2) that Starbucks is the gold standard of engagement. OK, this may not be a perfect model, because it doesn't consider volunteering and other measures. But we think any of the model's deficiencies are mitigated by the beauty of its simplicity.

You can calculate your ES by dividing your organization's 5-year donor LTV by $3,525, then multiplying by 100. Or, by equation: ES = (Nonprofit 5-year LTV / Starbucks 5-year LTV) × 100.
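Here is a minimal sketch of that calculation in Python; the $250 donor LTV in the example is hypothetical:

```python
# Engagement Score (ES): a nonprofit's 5-year donor LTV indexed
# against Starbucks' 5-year customer LTV ($14,100 / 20 years x 5 years).
STARBUCKS_5YR_LTV = 14_100 / 20 * 5  # = $3,525

def engagement_score(nonprofit_5yr_ltv: float) -> float:
    """Return the ES: nonprofit 5-year LTV as a percentage of Starbucks'."""
    return nonprofit_5yr_ltv / STARBUCKS_5YR_LTV * 100

# Hypothetical example: a donor file with a $250 five-year LTV.
print(f"ES = {engagement_score(250):.1f}")  # ES = 7.1
```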
Average Gift Size Testing and Statistical Significance

Last time, I discussed testing the statistical difference in response rates. In this blog, we will tackle average gift size testing.

I'll start with the problem we always run into: someone will give us the average gift sizes of a test and a control and ask if the difference is statistically significant. The problem is that you can't use summary statistics to test average gift sizes. Here's why: averages are susceptible to skewing. For example, the control could have an average gift size of $75 and the test $25, and the difference still may not be statistically significant, because the control may have had a single $5,000 gift skewing the results.

Therefore, to properly conduct statistical testing on averages, you must have the whole distribution of gifts. Each and every gift.

There are a number of different tests one can use to test significance. One of the most common is the t-test. In SPSS, the stats software we use at Analytical Ones, this test is easy to run and analyze.

Just like in response rate testing, you also need to know the level of test confidence you are comfortable with. The confidence level you require should be proportional to the risk involved in making a change. Again, for direct marketing tests, we recommend a 90% level of confidence. No one is going to die if we make a bad decision, unlike in pharmaceutical testing.

The trick with all this is the need to look at BOTH average gift size AND response rate when evaluating the "winning" package. Often...
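We run this in SPSS, but for readers who want to try it themselves, here is a sketch of the same two-sample t-test in Python with SciPy. The gift arrays are hypothetical, built to show how a single $5,000 gift can swamp a large difference in averages:

```python
import numpy as np
from scipy import stats

# Hypothetical gift-level data: every individual gift, not summary statistics.
control = np.array([25, 50, 35, 100, 20, 45, 30, 5000])  # one outlier gift
test = np.array([20, 30, 25, 15, 35, 25, 20, 30])

print(f"control mean: ${control.mean():.2f}")  # inflated by the $5,000 gift
print(f"test mean:    ${test.mean():.2f}")

# Welch's two-sample t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(control, test, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# At a 90% confidence level, the difference is significant if p < 0.10.
print("significant at 90%" if p_value < 0.10 else "not significant at 90%")
```

Despite a huge gap in the averages, the outlier's variance keeps the difference from reaching significance, which is exactly why you need the full distribution rather than the two averages.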
Response Rate Testing and Statistical Significance

One of the more straightforward analyses we do at Analytical Ones is comparing the results of a direct mail test to identify whether the differences are statistically significant. Though this is a straightforward analysis, there is a lot to these tests, so let me try to clarify a couple of things.

Generally, we are testing to determine whether the differences in either the response rates or the average gift sizes are statistically significant. These two tests are very different and require different data sets and methods.

Let's take the easy one first: testing the statistical difference in response rates. Because there are just two outcomes for response – yes, the donor responded, denoted by a value of "1", or no, the donor did not respond, denoted by a value of "0" – you can use summary statistics (averages) for this test. We recommend using a Z test. All you need for testing response rates are:

1. Number mailed in the control
2. Number of responses in the control
3. Number mailed in the test
4. Number of responses in the test

You also need to know the level of test confidence you are comfortable with. Confidence levels are standards. Let me try to explain. Typically, in Z tests, analysts use one of three confidence levels:

1. 99%
2. 95%
3. 90%

If your results are significant at the 99% confidence level, it means there is only a 1-in-100 chance you would see a difference this large if the test and control actually performed the same. That's a very high level of confidence. Conversely, if your results are significant at...
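For the curious, the Z test itself needs nothing beyond those four counts. Here is a minimal sketch in Python (standard library only); the mail quantities and response counts are hypothetical:

```python
import math

def two_proportion_z_test(mailed_control, resp_control, mailed_test, resp_test):
    """Two-proportion z-test on response rates, using only the four counts."""
    p1 = resp_control / mailed_control
    p2 = resp_test / mailed_test
    # Pooled response rate under the null hypothesis (no real difference).
    p_pool = (resp_control + resp_test) / (mailed_control + mailed_test)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / mailed_control + 1 / mailed_test))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: control mails 10,000 and gets 500 responses (5.0%);
# test mails 10,000 and gets 560 responses (5.6%).
z, p = two_proportion_z_test(10_000, 500, 10_000, 560)
print(f"z = {z:.2f}, p = {p:.3f}")  # significant at 90% if p < 0.10
```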
The Power of Anonymity

I have a daughter graduating from high school this month. Her class is heading to the Dominican Republic for a week on a mission trip to serve an orphanage. To go on this trip, she had to raise her own support of $1,500 or pay for it out of her savings. Honestly, I was hoping she could raise half of the money, and then we'd kick in the other half.

So, she started her own GoFundMe campaign, and to my utter shock, she hit her fundraising goal on the first day. Apparently, a couple of anonymous donors made some big gifts. I don't know who these people are, but I am grateful to them. And because I don't know who they are (they might even be reading this blog), I am motivated to be grateful to everyone I talk to. Because I just don't know.

That got me thinking. In my line of work, we go to great lengths to segment donors based on their past or potential giving. And while I have oodles of data showing this is an effective, utilitarian approach, I wonder whether it causes us to curtail our gratitude.

I take these kinds of questions seriously. It's one of the reasons I love fundraising. We are always struggling to optimize fundraising with a balance of art and science. And while we at Analytical Ones always think your decisions should be anchored in the data, they must also be anchored in...
Net Dollars Not Donors Part 2

This may sound heretical, but counting donors isn't as important as counting dollars. Here's why: donors are not of equal value.

For far too long in fundraising, there has been an assumption that "more donors are better." That would be true if (and only if) all donors had equal value. But they don't.

I think I know how we got here. Once upon a time, an analyst figured a donor's long-term value (LTV). Let's say that value was $225. Then the Development Director (DD) thought, "Hey, all I need is more new donors. The best way to get more new donors is to lower my acquisition ask string." Then five years later, the analyst recalculates the LTV and finds it's only $100. The DD is angry with the analyst: they just spent $100 acquiring each of these donors, so the net value after five years is $0. "You said each donor was worth $225!"

You know what happened. LTV is tied to first gift amount. Lower the ask, and you lower the LTV.

Yet I still talk with Development Directors who think 1,000 donors with an average gift size (AGS) between $10 and $24.99 are better than 10 donors with an AGS of $25 to $49.99. But when you take both acquisition and cultivation costs into the equation, as the table below shows, that's just not true. Acquiring 1,000 donors with an AGS between $10 and $24.99 will yield negative net revenue of $24,000 after five years, whereas just 10 donors with an AGS of $25 to $49.99 will yield $490 in positive net revenue.

If you want to change the direction of your fundraising programs, you first must change what you are...
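The arithmetic behind that comparison is simple: net revenue = donors × (LTV − acquisition cost − cultivation cost). Here is a sketch in Python; the per-donor LTV and cost figures are hypothetical values chosen only to reproduce the totals above, not the table's actual inputs:

```python
def five_year_net_revenue(donors: int, ltv: float,
                          acquisition_cost: float, cultivation_cost: float) -> float:
    """Net revenue after five years: value generated minus the cost
    of acquiring and then cultivating each donor."""
    return donors * (ltv - acquisition_cost - cultivation_cost)

# Hypothetical per-donor figures chosen to reproduce the post's totals:
# 1,000 low-AGS donors netting -$24,000; 10 higher-AGS donors netting +$490.
low_ags = five_year_net_revenue(donors=1_000, ltv=96.0,
                                acquisition_cost=100.0, cultivation_cost=20.0)
high_ags = five_year_net_revenue(donors=10, ltv=229.0,
                                 acquisition_cost=100.0, cultivation_cost=80.0)
print(f"1,000 donors, AGS $10-24.99: {low_ags:,.0f}")   # -24,000
print(f"   10 donors, AGS $25-49.99: {high_ags:,.0f}")  # 490
```

However you set the inputs, the point stands: when per-donor LTV falls below the combined cost of acquiring and cultivating a donor, adding more of those donors only digs the hole deeper.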