Online software such as SurveyMonkey and Google Forms has made the power of survey research available to anyone with an internet connection, rendering us a society of survey experts.
And WebMD has made us all doctors…
And Pinterest has made us all pastry chefs…
Do you see my point already?
Bill Jacobs and I share an alma mater, Northern Arizona University, where we both managed survey projects for the Social Research Laboratory while earning our graduate degrees in Sociology. Although our time at NAU was more than a decade apart, we were first connected by a mutual mentor from the lab. Since then, we have conducted hundreds of surveys for nonprofit, commercial, governmental, and private clients.
From time to time we witness the horror of the “in-house” survey. It reminds me of the local news every Fourth of July, when you hear about all the amateur fireworks accidents. The perceived simplicity and user-friendliness of the survey tools mentioned above have too often produced false confidence and ‘shoot from the hip’ research. Sometimes we’re called in to make sense of a bad survey, but once the data collection is done, it’s too late. At best you have useless data. At worst, you’ve been making bad decisions based on bad insights.
So, how do you know if you should conduct your own survey? I’ve put together a list of top-line questions about the process. If you feel comfortable with your answers below, survey away!
• How do I create a probability sample? Stratified sample? Convenience sample?
• What are the implications of each of the above?
• How many responses will be enough for a representative sample?
• How many invitations will I need to reach my target response level?
• How do I calculate margin of error? What margin is acceptable?
• If I need market respondents, where can I recruit them?
• How do I know a survey is the right tool, rather than a focus group or in-depth interview?
• What are the advantages of a phone or mail survey vs. a web survey?
• How long should a survey be? What problems does response fatigue cause?
• What is the difference between validity and reliability?
• What are common mistakes in survey question authorship?
• When do I need to use page, question, or response randomization?
• What are skip-patterns and branching?
• What is a double-barreled question?
• When should I use a ranking vs. a rating? Should I force rankings?
• What is the value of an open-ended question? Closed-ended?
• What is a Likert scale? What values are considered high or low?
• What industry standard questions are available on my topic?
• What questions might offend the audience, or even be illegal to ask?
• What must be included in a survey invitation?
• How many invitations should a respondent receive? How does this change the analysis?
• What subject lines and e-mail copy increase response rates?
• Do I need to provide a monetary incentive or gift for respondents? What value?
• How long will it take to reach the response levels I require?
• How do you deal with refusals?
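To give a flavor of the math hiding behind a few of the questions above, here is a sketch of the standard margin-of-error and sample-size formulas for a simple random sample of a proportion. The function names and the numbers in the usage comments are illustrative, not part of any client project, and a real study would also need to account for design effects and non-response.

```python
import math

def margin_of_error(n, p=0.5, z=1.96, population=None):
    """Margin of error for a proportion from a simple random sample.

    n: number of completed responses
    p: expected proportion (0.5 is the most conservative choice)
    z: z-score for the confidence level (1.96 is roughly 95%)
    population: if given, apply the finite population correction
    """
    moe = z * math.sqrt(p * (1 - p) / n)
    if population:
        moe *= math.sqrt((population - n) / (population - 1))
    return moe

def sample_size(moe=0.05, p=0.5, z=1.96):
    """Completed responses needed to hit a target margin of error."""
    return math.ceil((z ** 2) * p * (1 - p) / moe ** 2)

def invitations_needed(target_n, response_rate):
    """Invitations to send, given an expected response rate."""
    return math.ceil(target_n / response_rate)

# A target of +/-5 points at 95% confidence needs 385 completes;
# at a 10% response rate, that means 3,850 invitations.
n = sample_size(moe=0.05)            # 385
invites = invitations_needed(n, 0.10)  # 3850
```

Note that the required sample size barely depends on the size of the population you are studying, which is one of the more counterintuitive facts a first-time surveyor runs into.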
Analysis & Reporting:
• What is response bias? How do I deal with it?
• Is there a difference between early respondents and late respondents?
• How do I identify and remove bad responses?
• What is a weighting adjustment?
• How can I tell if my response data is statistically significant?
• How do my responses compare to those of other organizations?
• How do I analyze large sets of qualitative responses?
• What is re-coding? Grounded methodology?
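As a small illustration of the weighting-adjustment question above, here is a minimal post-stratification sketch: each respondent is weighted so that the sample's group shares match known population shares. The age groups and percentages are made-up numbers for illustration only.

```python
# Known population shares (e.g. from census data) vs. what the
# sample actually returned -- hypothetical figures.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

# Each respondent in a group gets weight = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Here, under-represented 18-34 respondents count double (weight 2.0),
# while over-represented 55+ respondents count 0.7.
```

Real weighting schemes (raking across several variables, trimming extreme weights) get considerably more involved, which is exactly why this question makes the checklist.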
Non-profit organizations and marketing agencies trust experts like Analytical Ones to manage their survey research because there is much more to conducting a survey than just writing down some questions and looking at the answers.
If you can’t answer these questions, don’t be that guy.
Give us a call.