Unsustainable Trends III: The Pains of Replenishment

A few years ago, I had a favorite “go-to” statistic when speaking to clients about acquisition. Savvy fundraisers already know that the full ROI on acquiring new donors cannot be measured by a single campaign or their first year of giving. New donors are the gift that keeps on giving, or at least one would hope. With acquisition campaign ROI falling to break-even levels or worse, many non-profits have become hesitant to spend much on acquiring new donors.

That’s when I would pull out my favorite stat: “A third of your revenue comes from donors acquired in the last 3 years.” New donors aren’t just a nice bonus to your donor file. They are essential.

Who ever thought they would miss the good old days of 2009? Well, in this instance I do, because my favorite statistic is becoming a victim of our Unsustainable Trends. Follow the trend in the chart below. It was true around 5 years ago that a third of revenue came from recently acquired donors. But overall decreasing acquisition numbers have a domino effect on the full file. We’re now down to 1 out of 5 dollars. That is a large segment of the typical donor file that is shrinking. Less acquired today = less to cultivate...
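If you want to check that ratio on your own file, the cut is straightforward. Here is a minimal sketch in Python, assuming a hypothetical gift-level table with an acquisition year recorded for each donor; the column names and figures are illustrative, not from the original analysis.

```python
# Minimal sketch: share of current-year revenue from donors acquired in the
# last 3 calendar years. Schema and numbers are hypothetical.
import pandas as pd

gifts = pd.DataFrame({
    "donor_id":      [1, 2, 3, 4, 5, 6],
    "gift_amount":   [25.0, 100.0, 50.0, 40.0, 75.0, 20.0],
    "gift_year":     [2013, 2013, 2013, 2013, 2013, 2013],
    "acquired_year": [2012, 2005, 2011, 2013, 1998, 2010],
})

report_year = 2013
# "Last 3 years" here means acquired in the report year or the two before it.
recent = gifts["acquired_year"] >= report_year - 2

recent_revenue = gifts.loc[recent, "gift_amount"].sum()
total_revenue = gifts["gift_amount"].sum()

print(f"Share of revenue from donors acquired in the last 3 years: "
      f"{recent_revenue / total_revenue:.1%}")
```

Running the same cut for each of the last several report years is what produces the trend line described above.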
Unsustainable Trends Part II: The Aging File

My business partner Bill’s blog Unsustainable Trends seems to have struck a chord with many of our friends in development roles. To continue this conversation, let’s take a look at one of the reasons your donor file might not be generating as much revenue as in the past. Your donors are over the hill, literally.

The chart below shows an example from a typical client database. On the vertical axis, I have plotted each donor’s 5-year value from 2008-2012 (0-12 month active donors, scored at the start of 2008). On the horizontal axis, we have the donors’ ages on January 1st, 2008. Here’s that hill we are talking about. It doesn’t take more than a second to notice that there is a distinct downward trend in LTV starting just after the age of 60.

Here’s where you should get really freaked out. The chart below shows the average age of a donor on that same file by lifecycle and year. Before running these numbers, I expected to see Multi-Years aging. That was no surprise, and I thought perhaps it was the extent of our problem. However, what scares me even more is that there is no help on the way. The rising red line for new donor age is the most alarming piece of this study. In an ideal world we’d see the red line moving in the opposite direction, meaning that we have found a way to tap into a younger donor base. At a minimum, we’d want to see this line remain flat, meaning that we are catching people at a particular stage in life as they reach it....
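The second cut is easy to reproduce on your own data. A minimal sketch, assuming a hypothetical donor-level table with a birth year, a lifecycle stage (e.g. “New”, “Multi-Year”), and a snapshot year; field names and values are illustrative, since the original post does not share its schema.

```python
# Minimal sketch: average donor age by lifecycle stage and snapshot year --
# the same cut behind the "rising red line" for new-donor age described above.
import pandas as pd

donors = pd.DataFrame({
    "donor_id":      [1, 2, 3, 4, 5, 6],
    "birth_year":    [1950, 1942, 1960, 1938, 1955, 1948],
    "lifecycle":     ["New", "Multi-Year", "New", "Multi-Year", "New", "Multi-Year"],
    "snapshot_year": [2008, 2008, 2012, 2012, 2012, 2008],
})

donors["age"] = donors["snapshot_year"] - donors["birth_year"]

avg_age = donors.groupby(["snapshot_year", "lifecycle"])["age"].mean().unstack()
print(avg_age)
```

If the “New” column rises year over year, the file is not replenishing itself with younger donors.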

Frank Sinatra’s Secret Life as a Market Researcher

One of the first decisions made when facing a new research question is whether quantitative or qualitative methods will collect the most useful data. When time and resources allow, I agree with the opinion of Ol’ Blue Eyes: “This I tell ya, brother, you can’t have one without the other.” For a thorough review of the differences between Qual and Quant, click here.

Quantitative can fall short because it often does not uncover the “why.” Qualitative often answers the “why,” but cannot address the “how much.” Whenever possible, a dual-method approach leads to the most reliable conclusions. “Try, try, try to separate them, it’s an illusion.”

I just finished a dual-method project for a Christian missionary organization. In an initial survey, one question was designed to gauge the respondents’ preparedness to complete a particular task following a seminar. The purpose was to determine whether the seminar was addressing this topic in a way that left attendees feeling prepared. The question asked for their level of agreement with the statement “I fully understand the tasks remaining to (complete this mission).” When reviewing the quantitative results, the number who “somewhat agree” outnumbered those who “strongly agree,” which would lead to the conclusion that there is much room for improvement in that area.

Then came the qualitative research… Following the survey, we conducted individual depth interviews with a sample of respondents. As part of each interview, we reviewed their response to the question above. I began to notice something interesting. Many of the individuals who only “somewhat agreed” that they “understood the task” actually were some of the most prepared....
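The quantitative half of a question like this is just a frequency table of the agreement scale. A minimal sketch, with purely illustrative response labels and counts rather than the organization’s actual data:

```python
# Minimal sketch: tabulate Likert-style agreement responses and report the
# share at each level. Responses here are made up for illustration.
from collections import Counter

responses = [
    "Strongly agree", "Somewhat agree", "Somewhat agree", "Neither",
    "Somewhat agree", "Strongly agree", "Somewhat disagree",
]

counts = Counter(responses)
total = len(responses)
for level in ["Strongly agree", "Somewhat agree", "Neither",
              "Somewhat disagree", "Strongly disagree"]:
    n = counts.get(level, 0)
    print(f"{level:<18} {n:>3}  ({n / total:.0%})")
```

The table tells you how many landed in each bucket; it takes the interviews to learn what “somewhat agree” actually meant to them.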
What Friendships Tell Us About Brand Power

The number one factor for a donor deciding where to give a gift is their awareness of an organization. Our research continually shows that while stewardship, trust, and effectiveness are important concepts in donor perceptions, they are all moot points if donors have insufficient knowledge of an organization. In market research, we measure brand awareness in two different ways:

Brand recognition, or Aided Awareness: When shown a list of brands including yours, does the respondent recognize your brand?

Brand recall, or Top of Mind Awareness: When asked about a general category, does the respondent name your brand without help?

Both of these measures are useful, but which one is more important? I’ll make an argument for one with a personal story.

This past Friday marked one year since my father passed away from cancer. In our era of hyper-sharing through social media, I had considered mentioning something about it on Facebook. No doubt, there would have been many “likes” and kind comments from a range of friends. And they would all be sincere. But for some reason I just felt like keeping it private this time.

That evening, I did unexpectedly receive a call from my best friend. We don’t talk as much as we should, but every time we do I know it was time well spent. We have the type of bond that goes beyond Aided Awareness. He had remembered the date himself and was thinking about me. This meant a lot more than a hundred “likes.”

Back to brands now… There’s certainly value in being recognized on a list, but your most loyal supporters and frequent...
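Tying the two measures above back to survey data: aided awareness is the share of respondents who check your brand on a prompted list, and top-of-mind awareness is the share who name it first, unprompted. A minimal sketch, assuming hypothetical response fields (“recognized_brands” and “first_mention”); the names and records are illustrative only.

```python
# Minimal sketch: compute aided (recognition) vs. top-of-mind (recall)
# awareness from hypothetical survey responses.
brand = "Our Charity"

respondents = [
    {"recognized_brands": ["Our Charity", "Other Org"], "first_mention": "Other Org"},
    {"recognized_brands": ["Our Charity"],              "first_mention": "Our Charity"},
    {"recognized_brands": ["Other Org"],                "first_mention": "Other Org"},
]

n = len(respondents)
aided = sum(brand in r["recognized_brands"] for r in respondents) / n
top_of_mind = sum(r["first_mention"] == brand for r in respondents) / n

print(f"Aided awareness (recognition): {aided:.0%}")
print(f"Top-of-mind awareness (recall): {top_of_mind:.0%}")
```

Aided awareness will always be the larger number; the gap between the two is where the best-friend kind of loyalty lives.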

Managing Surveys and Baseball Teams

As I sit here in my red shirt and Tribe cap, just hours before the American League Wild Card game, I can’t help that my mind is on baseball. Last week, following several late-inning implosions from vilified Indians closer Chris Perez, I was compelled to quantify the fans’ response to the situation. I decided to launch an impromptu survey about how the Indians should proceed in dealing with their lack of a dependable closer.

I didn’t want to pay for a large commercial sample, and I wasn’t concerned about the scientific validity of my results, so I opted for a simple convenience sample. That is, I polled anyone who was available and willing, with no attempt to create a representative sample. I asked a few short questions, but most importantly: “Who should close games for the Indians in the 2013 playoffs?” I posted my survey to a couple of Indians message boards, and it also got picked up by a heavily followed Facebook page, “Tom Hamilton has the Best Home Run Call in Baseball.” And, by the way, he does. My survey ended up with over 500 responses.

From an outside perspective, I could tell myself that my results would be very unscientific and therefore unreliable. These are very important concerns when choosing your survey methodology. But, as it happened, the Cleveland Plain Dealer ran almost the exact same poll that day. Newspaper polls suffer the same audience biases and shortcomings of convenience samples. Two “bad” surveys couldn’t possibly agree with each other, right? Wouldn’t they both come back with garbage? Wrong. The results of my...
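If you want to put a number on how closely two such polls line up, a simple side-by-side of the answer shares is enough. A minimal sketch, with purely illustrative options and counts rather than the actual survey or Plain Dealer results:

```python
# Minimal sketch: compare answer shares from two convenience-sample polls of
# the same question. Options and counts are made up for illustration.
poll_a = {"Chris Perez": 120, "Someone else": 260, "Bullpen by committee": 140}
poll_b = {"Chris Perez": 95,  "Someone else": 210, "Bullpen by committee": 115}

def shares(poll):
    total = sum(poll.values())
    return {option: count / total for option, count in poll.items()}

shares_a, shares_b = shares(poll_a), shares(poll_b)
for option in poll_a:
    a, b = shares_a[option], shares_b[option]
    print(f"{option:<22} poll A: {a:6.1%}   poll B: {b:6.1%}   gap: {abs(a - b):.1%}")
```

Small gaps across every option suggest the two “unscientific” samples are at least telling the same story, even if neither one is representative of all fans.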