The quality and impact of survey-related analysis and advice range from unreasonably great to hopelessly inadequate. Any time you encounter quantitative claims, be they in sales presentations, advertisements, brochureware, analyst whitepapers, or articles in the trade, business, or public press, you face a challenge: How much of what you see should you believe?

  • Is this just frivolous numerical decoration in an opinion piece? (Most technology-related advice is embellished with numerical decoration.)
  • Is it solid data and conclusions derived from high-quality surveys?
  • Or something in between?

This article focuses on the class of survey-related content I come across most often: commercial survey-related content, or C-SRC.

The business value of the C-SRC pushed at buyers is low, and sometimes negative. In this article, I analyze C-SRC’s four essential flaws and describe two workarounds to consider when you face quantitative claims that may bear on your business investment decisions.

Essential Flaws

C-SRC fails on at least four essential counts: historical incoherency, lazy data gathering, inadequate sample definitions, and under-appreciation of journal publishing standards.

Be alert to:

1. Historical incoherency

Survey-related content is incoherent when the authors don’t fit their work into the broader body of information and knowledge in the field they’re trying to cover. Warning signals: no foundation, no literature review, no other historical documentation.

Better survey-related content coherently describes how it’s built on or challenges the accumulated knowledge in the area.

C-SRC creators show no regard or respect for the findings of others. If they cite prior work at all, they cite only their own work, no one else’s. They’re focused on proving they’re smarter and more valuable than anyone else, and they don’t want to clutter your mind (or theirs) with anything other researchers have done.

Coherency is a cost issue: better-quality work costs more, and too many business decision-makers don’t demand it, so they are stuck with content from many sources, all in irresolvable conflict. Have you been there often, trying to reconcile conflicting advice based on allegedly objective data? I bet you have.

Without a coherent, validated, documented history, researchers can make up whatever assumptions, models, and problems they want. Great research rewrites history, so it isn’t sacrosanct, but research presented without the historical context of relevant earlier work is grossly inadequate.

2. Lazy data gathering

Three major issues here: statistical, experimental, and psychological.

Surveys formulated just to collect data aren’t doing enough. Authors of survey-related research need a hypothesis they’re testing, contesting, or extending. Otherwise, they’re probably guilty of data dredging, falling for the Texas sharpshooter fallacy, or committing other analytical errors that violate their statistical tests’ assumptions. If, that is, they’ve done any statistical analysis at all.
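
To see why hypothesis-free data trawling is dangerous, consider a minimal simulation (my own illustration, not from any survey discussed here): feed pure noise through enough pairwise significance tests and dozens of “findings” appear by chance alone.

```python
# Illustrative simulation of data dredging: with no real effects in the
# data, testing many hypotheses still yields "significant" results.
# (Hypothetical example; requires numpy and scipy.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_respondents, n_questions = 200, 40

# Pure noise: survey "answers" with no underlying relationships at all.
answers = rng.normal(size=(n_respondents, n_questions))

false_positives = 0
tests = 0
for i in range(n_questions):
    for j in range(i + 1, n_questions):
        r, p = stats.pearsonr(answers[:, i], answers[:, j])
        tests += 1
        if p < 0.05:  # the conventional significance threshold
            false_positives += 1

# Roughly 5% of the 780 tests will "succeed" despite zero real signal.
print(f"{false_positives} of {tests} tests were 'significant' at p < 0.05")
```

A researcher who reports only the handful of tests that “succeeded” is shooting the arrow first and painting the target afterward.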

Hypotheses created after the data has been collected are the start of a research project, not the end. Merely collecting and presenting data deprives everyone of the insights that could have come from empirically testing the hypotheses that emerged from it. Instead, many authors slip in untested explanations and theories as though they were facts, corrupting the content’s value.

Confirmation bias is a worse data-gathering flaw. Some analysts begin with an assumption, select only the data that supports it, and then present that assumption as their conclusion and the basis of their recommendations. Look for evidence of conscious or unconscious selective data filtering, and for comments that dismiss alternative conclusions without adequate foundation.

3. Inadequate sample definition

Good surveys are designed to collect information from a relatively small number of people who represent a larger population, so the results are generalizable to the target population. Most surveys you’ll find in opinion pieces fail here.

Survey samples are either scientific or non-scientific (the latter also known as “samples of convenience”). The surveys you find in many investment analyses rely on non-scientific samples: their results aren’t generalizable to a larger population. But authors often generalize nonetheless, which can lead investors to make bad decisions.

Suppose you encountered a survey of 50 CIOs at the top US financial services firms. What would you think if the authors generalized the sample’s data to all firms with at least 10,000 employees in all industries in the G20? That generalization is indefensible on several dimensions, including firm size, industry, and geography.
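
As a toy sketch of why that generalization fails (all segments and numbers below are hypothetical, chosen only for illustration), compare what a convenience sample drawn from one narrow segment estimates against a random sample from the whole population:

```python
# Toy illustration (made-up numbers): a convenience sample drawn from
# one unrepresentative segment badly misestimates the population.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: 2% are top financial-services firms with
# high "adoption" scores; the other 98% score much lower on average.
finance = rng.normal(loc=80, scale=5, size=2_000)
everyone_else = rng.normal(loc=40, scale=15, size=98_000)
population = np.concatenate([finance, everyone_else])

# Convenience sample: 50 CIOs, all drawn from the finance segment.
convenience = rng.choice(finance, size=50, replace=False)

# Scientific alternative: 50 firms drawn at random from everyone.
random_sample = rng.choice(population, size=50, replace=False)

print(f"True population mean:   {population.mean():5.1f}")
print(f"Random-sample estimate: {random_sample.mean():5.1f}")  # close
print(f"Convenience 'estimate': {convenience.mean():5.1f}")    # way off
```

The random sample lands near the true mean; the convenience sample reports the finance segment’s mean and calls it the market’s.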

The next time you look at a piece that quotes a survey the organization conducted or commissioned, examine whether the sample really represents the larger marketplace. Odds are the authors don’t provide enough information to answer that question, so you should think twice before believing their conclusions and recommendations.

4. Under-appreciation of journal publishing standards

Journals apply many standards to survey-related research, among them full disclosure of all pertinent survey details and analytical methods, as well as peer review by two or more independent experts. Articles based in whole or in part on commercial-grade survey-related content typically fail most of the standards journals use to decide whether to publish. You should be almost as picky.

Independent peer review (IPR) is the final line of defense against lousy research. Without IPR, you get:

    • Opinion disguised as fact, and fads and memes masquerading as substantive trends.
    • Correlations confused with causations (see the sketch after this list).
    • Great exclamations: claims of massive acceleration and adoption (or, less often, the opposite) without adequate evidence.
    • Recommendations and conclusions unrelated to the actual research project. You have to sleuth out the disconnects.
    • Authors with samples of convenience inappropriately projecting results to the entire population.
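
To make the correlation-versus-causation point concrete, here is a small simulation (the variables and coefficients are my own invention) in which a hidden confounder, company size, drives both “tool spend” and “revenue growth,” producing a strong correlation with no causal link between the two:

```python
# Minimal sketch (hypothetical variables): a hidden confounder makes
# two causally unrelated metrics look strongly correlated.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000

# Confounder: company size drives both observed metrics.
company_size = rng.normal(size=n)
tool_spend = 2.0 * company_size + rng.normal(scale=0.5, size=n)
revenue_growth = 1.5 * company_size + rng.normal(scale=0.5, size=n)

# The raw correlation is strong, yet spend does not cause growth here.
r_raw = np.corrcoef(tool_spend, revenue_growth)[0, 1]
print(f"Raw correlation: {r_raw:.2f}")

# Controlling for the confounder (residualizing both metrics on
# company size) collapses the apparent relationship toward zero.
slope_s, icept_s = np.polyfit(company_size, tool_spend, 1)
slope_g, icept_g = np.polyfit(company_size, revenue_growth, 1)
spend_resid = tool_spend - (slope_s * company_size + icept_s)
growth_resid = revenue_growth - (slope_g * company_size + icept_g)
r_adj = np.corrcoef(spend_resid, growth_resid)[0, 1]
print(f"Correlation after controlling for size: {r_adj:.2f}")
```

A reviewer’s job is to ask whether the authors checked for lurking variables like this before declaring that one metric drives the other.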

In most cases, you’re not going to see any evidence of independent peer review of survey-related analysis and advice, so what should you do?

Workarounds

1. You can do it yourself! Refer to this set of articles for more depth on how to review survey-related research as a reader:

Start with a simple framework for unreasonably great research. Then work your way through three follow-on articles on components of that framework: the organizations and people involved, the science of sampling, and how the survey was designed and executed.

2. Or you can ask me for my opinion on the quality and value of the survey-related research claims you’re looking at. I write unsolicited peer reviews. No doubt other authors will eventually do the same to me. If we keep at it, we can collectively raise the quality and impact of research this way.

Take me up on that offer.

Tom Austin

© 2022 – Tom Austin — All Rights Reserved.
This research reflects my personal opinion at the time of publication. I declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.