Image by Gino Crescoli from Pixabay

Survey design & execution in the “unreasonably great research framework”

What do you need to know about survey design and execution to make better business decisions? This is the third note in my framework-defining series[1], exploring the framework I introduced earlier in Discover unreasonably great research! And exploit it.

Context: Knowing more about how quality survey-based research is created will help you better evaluate the quantitative, survey-based research (numbers, graphs, tables, charts) that vendors, consultants, analysts, and others include in their business proposals to you.

Imperatives from the first two notes in this series:

    • Looking through the lens of honesty and transparency, evaluate the people and organizations offering you quantitative data supporting their proposal. Include any work outsourced to particular service providers (like polling firms). Evaluate their motives, trustworthiness, integrity, reputation, and how central their proposal is to their overall business; also consider how much of their relevant experience is specifically focused on your particular needs.
    • Is this legitimate research or marketing? First principles: Quantitative research should be well framed, well planned, and designed to advance our knowledge. It should aspire to meet or exceed the standards to which academics are held. Does it convey a sense of history (a literature review) and test a clear hypothesis based on that review? Is the target population well defined, and is the sample well aligned to the target population? Where did the sample come from? How was it incented? What response rates were observed? Was the sample prescreened? Did the research undergo independent peer review?

In later notes, I’ll:

  • Dive into the essentials for interpreting results, drawing conclusions, and identifying issues and gaps.
  • Apply the key concepts from all five framework-related notes to valuable operational questions you might be facing, such as “How can predictive analytics improve business performance?”
  • Deliver a report card to apply to research you may be reading, proposals you might see, and claims you might hear.
  • Publish report cards evaluating key academic, popular, and marketing survey-based claims related to valuable operational questions and areas of general market uncertainty.

Summary of this note

How was the questionnaire itself (the survey instrument) created, tested, and tuned? Was it a snapshot in time or a comparison across time? How were the question items created: brainstorming, focus groups, reuse of questions from earlier studies? What validity and reliability testing was performed? How did the researchers test for respondent fatigue? Response rates affect the overall quality of the results. What were the response rates? How were incomplete surveys and missing data treated?

Main body

Surveys can take many forms (e.g., interactive, written, face-to-face, or web-based), but in almost every case there is a survey instrument: a full script that contains question items, instructions, flow control, and supporting material. The instrument is critical to understanding and analyzing how the survey worked.

The survey instrument is not readily accessible in most cases, particularly for non-academic work.

Consider real-world data on peer-reviewed academic research published in medical journals:

In a review of 117 published surveys, few provided the questionnaire or core questions (35%), defined the response rate (25%), reported the validity or reliability of the instrument (19%), discussed the representativeness of the sample (11%) or identified how missing data were handled (11%).

Source: Reporting guidelines for survey research: an analysis of published guidance and reporting practices[2] (2010), Bennett C, Khangura S, Brehaut JC, Graham ID, Moher D, Potter BK, Grimshaw JM

It is far rarer for commercial research products published by analysts, management consultants, and sellers to include a complete list of all questions asked, and it is almost unheard of to see a readily accessible, full survey instrument accompanying those results.

Ask for it. If you can’t get access, you will have less information on which to evaluate the survey-based data (but there are other questions you can ask, covered below, that can compensate for the missing information).

More Detail

Survey instruments contain question items (the queries to ask and the way the respondent is supposed to answer them, e.g., Yes/No, Likert scale, or open-ended text). There’s more to it than creating question items. The instrument also contains:

    • Instructions to both the respondent and, if present, the interviewer
    • Flow control, describing how to navigate from one item to another, e.g., conditionally skipping some questions (see the sketch after this list)
    • Supporting material, such as definitions, and actions, for example, what to do if the respondent seems uncertain about a question.
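
To make flow control concrete, here is a minimal sketch of how skip logic might look if an instrument were encoded programmatically. The question IDs, wording, and branching rules are hypothetical, invented purely for illustration.

```python
# Hypothetical skip logic: a "No" answer to Q3 jumps past the follow-up block
# straight to Q7. Question IDs and rules are invented for illustration only.
survey_flow = {
    "Q3": {
        "text": "Have you run a customer survey in the past 12 months?",
        "type": "yes_no",
        "next": {"Yes": "Q4", "No": "Q7"},   # conditional branch
    },
    "Q4": {
        "text": "Roughly how many responses did you collect?",
        "type": "open_numeric",
        "next": {"*": "Q5"},                 # unconditional next item
    },
}

def next_question(current_id: str, answer: str) -> str:
    """Return the next question ID given the respondent's answer ('*' matches any answer)."""
    rules = survey_flow[current_id]["next"]
    return rules.get(answer, rules.get("*"))

print(next_question("Q3", "No"))   # -> Q7 (the follow-up block is skipped)
print(next_question("Q4", "250"))  # -> Q5
```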

In commercial research, vendors, analysts, and consultants often refuse to share the survey instrument with prospects and their advisors, claiming it is proprietary, a competitive advantage, a secret, or otherwise against their policy.

That’s nonsense.

If they don’t want you to see the detailed script used in the survey, that has a bearing on how much you should trust the people, the results they’re showing you, and the proposal they’re justifying, at least in part, with the (perhaps now suspect) data.

Probe further, but respect the goal of this line of research

Use the three question blocks below as the basis for additional questioning to help you evaluate the quality and value of proposals and plans based on survey-based research. If you want to go deeper, you might want to start here, but don’t confuse goals. It’s not my goal to give you enough information to excel at doing survey-based research, only to excel at evaluating the quality and value of such research. I also want to be transparent: I will use these guidelines in evaluations of survey-based research I publish.

  1. Survey instrument creation process

Who created it? How?

Did the researchers copy some or all of the content from earlier survey instruments? That implies they are trying to compare changes in responses over time (also known as longitudinal research). That’s good, provided they used the exact same survey instrument each time they ran the survey. How much of the survey instrument did they change in subsequent runs of the study? Substantive changes in the survey instrument can invalidate longitudinal comparisons.

How were new question items generated? Internal brainstorming? User group or customer input? Did the item generation process involve external focus groups?

Skim through this outline-based slide set to envision the breadth of details that impact survey quality, starting with item creation.

  2. Survey instrument testing and tuning

Tuning and testing apply to the individual survey items as well as the survey instrument as a whole.

Survey instruments need to be tested before they’re launched, and the tests should be run using people who are representative of the target population. What did the researchers do?

Detail

Testing needs to cover the wording of individual items on the questionnaire as well as the overall flow of the process that respondents will encounter.

How did the researchers tune the clarity, validity, and reliability of the questionnaire? Did they measure test-retest reliability or inter-item consistency? The order in which questions are asked can inappropriately influence the answers received. How did the researchers adjust the flow to minimize such sequence effects?
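
For readers who want to see what one of these checks can look like in practice, inter-item consistency is commonly summarized with Cronbach's alpha. Below is a minimal Python sketch assuming pilot responses are numeric (e.g., Likert-scale answers) arranged as a respondents-by-items matrix; the data are made up for illustration.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of numeric answers."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 5 respondents answering 4 Likert-scale items (1-5)
pilot = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")  # values near 0.7+ are often treated as acceptable
```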

Might the instructions to respondents, item descriptions, definitions, or questions themselves bias the survey’s outcome?

If the survey was interactive (e.g., face-to-face, telephonic, video-based, or via a text-chat interface), how were the interviewers trained? How were they tested and monitored while conducting interviews to ensure their cognitive biases didn’t affect respondents’ answers?

Any questions not needed to test the hypothesis behind the research or to characterize the essential demographics of respondents should be removed. A failure to trim aggressively leads to respondent fatigue, reduced survey completion rates, poorer-quality data, and a temptation to engage in data dredging.

  3. Response rate and completion rate

How many people were invited to take the survey? How many began it? How many incomplete surveys were returned? (Incomplete paper surveys are rarely returned if there are no special incentives, but incompletes can be captured automatically with electronic or telephonic surveys.) Were incomplete surveys thrown out? How did the researcher deal with missing data?

Did response and completion rate numbers drive any corrective action to improve them?

Increasing the number of people invited shouldn’t, by itself, affect the response or completion rates, only the total number of surveys returned.

Detail

Response rate is the number of respondents divided by the number invited to take the survey.  Completion rate is the number of surveys completed divided by the number started.
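
As a simple illustration of those two definitions, here is a small Python sketch; the helper function and the counts are hypothetical.

```python
def survey_rates(invited: int, started: int, completed: int) -> dict:
    """Response and completion rates as defined above (illustrative helper, not a standard API)."""
    return {
        "response_rate": started / invited,      # people who began the survey / people invited
        "completion_rate": completed / started,  # surveys finished / surveys started
    }

# Hypothetical example: 2,000 invited, 400 started, 300 completed
print(survey_rates(invited=2000, started=400, completed=300))
# {'response_rate': 0.2, 'completion_rate': 0.75}

# Doubling the invitations without changing respondent behavior leaves both rates
# unchanged; only the absolute number of returned surveys grows.
print(survey_rates(invited=4000, started=800, completed=600))
# {'response_rate': 0.2, 'completion_rate': 0.75}
```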

The lower the response rate, the less likely it is that the sample accurately matches the target population; in that case, testing alignment between the sample and the target population becomes essential.
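
One common way to test that alignment on a single demographic variable is a chi-square goodness-of-fit test against the known (or assumed) population mix. The categories, proportions, and counts below are hypothetical; this is only a sketch of the idea.

```python
# Spot-check sample/population alignment on one demographic (e.g., company size)
# with a chi-square goodness-of-fit test. All numbers here are invented.
from scipy.stats import chisquare

population_share = {"small": 0.60, "medium": 0.25, "large": 0.15}  # assumed target-population mix
sample_counts    = {"small": 150,  "medium": 90,   "large": 60}    # observed respondents (n = 300)

n = sum(sample_counts.values())
observed = [sample_counts[k] for k in population_share]
expected = [population_share[k] * n for k in population_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
# A small p-value suggests the sample's mix differs meaningfully from the target population.
```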

The lower the completion rate, the more likely the survey instrument wasn’t adequately tested and tuned, or the hypothesis under test was too complex.

Methods to increase response to postal and electronic questionnaires (2009) by PJ Edwards reviews a broad range of alternatives. Shorter questionnaires and various incentives seem to stand out.

Next in this series

Analyzing and drawing conclusions from the data.

©2022 Tom Austin — All Rights Reserved

[1] The framework was originally defined in https://thansyn.com/discover-unreasonably-great-research-and-exploit-it/ and has been followed by three (of five planned) notes that further define it. The first note in the framework-defining series is https://thansyn.com/organizations-and-people-in-the-unreasonably-great-research-framework. The second is https://thansyn.com/science-in-the-unreasonably-great-research-framework/. This note is the third.

[2] See Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices (2010), Bennett et al., https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3149080/, and How to assess a survey report: a guide for readers and peer reviewers (2015), Burns and Kho, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4387061/


Disclosure

The views and opinions in this analysis are my own and do not represent positions or opinions of The Analyst Syndicate. Read more on the Disclosure Policy.
