Unsolicited peer review exposes deeply flawed research
Subject: NewVantage Partners’ paper Data and Innovation: How Big Data and AI are Accelerating Business Transformation, 2019, by Randy Bean and Thomas H. Davenport
I discovered the subject report during my recent analysis of current Google Ads. This 2019 report (promoted via Google Ads in 2022) is the 8th in an annual series that started in 2012 and is still ongoing. The reports are based on yearly surveys conducted by NewVantage Partners – NVP. Survey questions change somewhat from year to year, and the survey sample appears to be a subset of NVP’s client base.
Research category assessment
The subject report falls into the commercial-grade survey-related content (C-SRC) category.
My expectations for C-SRC work are low. C-SRC delivers low or negative value for buyers because, as described in Detect & defuse the limits of survey-related advice, C-SRC works share four common flaws:
- Historical incoherence
- Lazy data gathering
- Inadequate sample definitions
- Under-appreciation of journal publishing standards
Summary findings for the NVP report
The report’s primary value appears to be:
- Business-building clickbait for less-well-known firms that endeavor to attract leads by offering access to “thought-leadership” papers written by more widely known authors.
- Business-building clickbait for NVP itself. The subject report could attract executive prospects who are at a loss for new ways to compete in business, and its ambiguities could serve as an enticement to draw new clients to NVP to learn how to reinvent themselves.
All four flaws observed in the commercial-grade survey-related content category are present in the NVP report. Three additional specific failings drive down my rating even further:
- Confounding bias. The respondents appear to be primarily a subset of the researchers’ client base, and the researchers’ positions likely influence their opinions. It’s not just the stream of Bean and Davenport’s publications: the researchers invite their clients to breakfast and dinner roundtables at which they discuss the content of the surveys and similar material. These interactions are likely to directly influence respondents’ answers, no matter how the surveys are distributed. The magnitude of the confounding bias will tend to be lower for respondents who never attend the roundtables, but even there we might find a second-order effect within client firms.
- Lack of analytics. One of the two researchers, Thomas H. Davenport, is, by reputation, a thought leader on “Competing on Analytics.” The absence of analytics in the report is shocking.
- Lack of any coverage of methodology. Hence, ambiguity pervades almost all of the survey findings. Most C-SRC work provides at least a minimal description of methodology.
Please don’t ignore the implications of the four main flaws of C-SRC work either:
On historical incoherence, this paper presented data without a single shred of broader historical context.
On lazy data gathering, the survey appears to be an exercise in confirmation bias, not a rigorous and unbiased attempt at discovery. There’s no statistical analysis, nor any experimental work to test the authors’ conclusions.
On inadequate sample definitions, NVP’s research paradigm feels more like a signal-amplifying echo chamber than anything else. Do not ignore the generalization failures in this report; because of them, it’s unlikely the results are directly relevant to you or your enterprise.
- Results from this “survey of convenience” cannot be generalized beyond the specific firms that responded to the survey (if that – see confounding bias above).
- The tiny sample (63 responses in 2019) was dominated by extremely large financial services firms.
- Healthcare? The report noted twice as many healthcare respondents as in the prior year, but thirteen respondents aren’t enough to draw meaningful conclusions. Factual data (from over 93 million internet job postings) suggests the researchers’ embrace of healthcare’s enthusiasm for AI investments is misplaced.
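A back-of-the-envelope calculation shows just how little thirteen (or even sixty-three) responses can tell you. The sketch below assumes simple random sampling – an assumption this convenience sample does not satisfy, so the true uncertainty is even worse – and computes the worst-case 95% margin of error for a reported proportion at each sample size.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case 95% margin of error for a surveyed proportion.

    Uses the normal approximation z * sqrt(p(1-p)/n), with p = 0.5
    giving the widest (most conservative) interval.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Thirteen healthcare respondents: roughly +/- 27 percentage points.
print(f"n=13: +/- {margin_of_error(13):.1%}")

# The full 2019 sample of 63: still roughly +/- 12 percentage points.
print(f"n=63: +/- {margin_of_error(63):.1%}")
```

In other words, a headline claim such as “60% of healthcare respondents are increasing AI investment” is, at n=13, statistically indistinguishable from anywhere between roughly one-third and nearly nine-tenths – and that is before accounting for the confounding bias described above.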
On under-appreciation of journal standards, there’s no evidence of sufficient transparency to allow replication attempts or enable independent peer review.
As I said in Detect & defuse the limits of survey-related advice:
Without quality independent peer review, quality suffers. And so do you.
- You can do it yourself! Refer to this set of articles for more depth, as a reader, on how to review survey-related research:
Start with a simple framework for unreasonably great research. Then work your way through three follow-on articles on components of that framework: the organizations and people involved, the science of sampling, and how the survey was designed and executed.
- Or you can ask me for my opinion on the quality and value of the survey-related research claims you’re looking at. I write unsolicited peer reviews. No doubt, eventually, other authors will do the same to me. We can collectively raise the quality and impact of research this way if we keep at it.
Take me up on that offer.
© 2022 – Tom Austin — All Rights Reserved.
This research reflects my personal opinion at the time of publication. I declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.