Most analysts and consulting firms rely on sloppy, biased, junk science to generate numbers to support their findings and recommendations.

I’m dissecting published research, looking for reports that guide readers on the uses and value of predictive analytics: studies built on high-quality data, objective analysis, and actionable conclusions. Given the reported high levels of spending and spending growth on analytics of all forms, that’s not too much to ask for, is it?

Few reports objectively deliver real value. But some gems, some unreasonably great works, do exist. They make expert crystal ball gazing and juiced and jived marketing surveys obsolete, particularly if you know how to analyze and use them. I want to help you do that over the next several pieces I write.

Start with a Great Research Framework

Unreasonably great research – which does exist – fits the following framework. It’s

    • Honest and transparent, scientific, well-designed and executed, based on very large-scale surveys using validated methods, sampling data from a majority of enterprises in the nation, and passing the rigorous tests that are typically applied to scholarly work (including but not limited to independent peer review).
    • Contextually framed, starting with extensive literature reviews in economics and other social sciences, encapsulating the history and prior art related to the specific area under study.
    • Not bound by context or conventional wisdom, often deviating from or redirecting earlier findings, conclusions, and beliefs.

I don’t think research that fits in that framing is perfect. Of course not. But it’s phenomenally better than most of what’s pumped out by the Technology-Industry-Complex[1].

Unreasonably Great Research Example 1.

Here’s an example of unreasonably great research on predictive analytics:

The Power of Prediction: Predictive Analytics, Workplace Complements, and Business Performance, April 2021, by Erik Brynjolfsson, Wang Jin, and Kristina McElheran

This research results from an outstanding collaboration between academics, the U.S. Census Bureau, and the National Bureau of Economic Research (NBER). Consider this quotation:

At the firm level, managers struggle to close the gap between the promise of predictive analytics and its performance…Some firms see no benefit at all…Adoption may be widespread, but business gains are not.

In all honesty, many reports and proposals make similar statements, but only as a prelude to a pitch for how the commercial firm will, of course, fix this problem for you, if only you commit to a significant contract with them. In other words, those findings are fish bait.

Unreasonably great research is not fish bait.

In a simple sense, it’s data about data and analysis of that data, and it’s deep: over 30,000 American manufacturing establishments were surveyed in Brynjolfsson et al. (2021) on their use of predictive analytics and detailed workplace characteristics.

Here’s the paper’s abstract (I’ve broken the abstract’s single paragraph into bullets to improve readability):

    • Anecdotes abound suggesting that the use of predictive analytics boosts firm performance. However, large-scale representative data on this phenomenon have been lacking.
    • Working with the Census Bureau, we surveyed over 30,000 American manufacturing establishments on their use of predictive analytics and detailed workplace characteristics.
    • We find that productivity is significantly higher among plants that use predictive analytics—up to $918,000 higher sales compared to similar competitors. [The “up to” phrase is deceptive…this is an excellent paper, but it’s not perfect.]
    • Furthermore, both instrumental variables estimates and timing of gains suggest a causal relationship.
    • However, we find that the productivity pay-off only occurs when predictive analytics are combined with at least one of three workplace complements: significant accumulation of IT capital, educated workers, or workplaces designed for high flow-efficiency production.
    • Our findings support claims that predictive analytics can substantially boost performance, while also explaining why some firms see no benefits at all.

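The abstract’s mention of “instrumental variables estimates” may be unfamiliar to non-academic readers. Here’s a minimal, purely hypothetical Python sketch of the idea behind two-stage least squares, the workhorse instrumental-variables estimator. Everything below (the variable names, the numbers, the instrument) is invented for illustration and has nothing to do with the paper’s actual data or methods.

import numpy as np

# Toy illustration of an instrumental-variables (two-stage least squares) estimate.
# Hypothetical setup: 'adoption' (predictive-analytics use) is endogenous,
# 'instrument' shifts adoption but affects sales only through adoption,
# and an unobserved 'confounder' drives both adoption and sales.

rng = np.random.default_rng(0)
n = 5_000

instrument = rng.normal(size=n)          # e.g., exposure to an adoption driver
confounder = rng.normal(size=n)          # unobserved factor that biases naive OLS
adoption = 0.8 * instrument + 0.5 * confounder + rng.normal(size=n)
sales = 2.0 * adoption + 1.5 * confounder + rng.normal(size=n)  # true effect = 2.0

def ols(y, x):
    """Ordinary least squares with an intercept; returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS is biased upward because the confounder drives both variables.
naive = ols(sales, adoption)[1]

# Stage 1: predict adoption from the instrument alone.
adoption_hat = np.column_stack([np.ones(n), instrument]) @ ols(adoption, instrument)

# Stage 2: regress sales on predicted adoption; this recovers roughly 2.0.
iv = ols(sales, adoption_hat)[1]

print(f"naive OLS estimate: {naive:.2f}  (biased)")
print(f"2SLS / IV estimate: {iv:.2f}  (close to the true effect of 2.0)")

Running the sketch shows the naive estimate drifting well above 2.0 while the two-stage estimate stays close to it; that gap is the kind of bias instrumental-variables estimates are designed to remove.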
Unreasonably Great Research Example 2.

Here’s another example of unreasonably great research:

Advanced Technologies Adoption and Use by U.S. Firms: Evidence from the Annual Business Survey, December 2020, by Nikolas Zolas, Zachary Kroff, Erik Brynjolfsson, Kristina McElheran, David N. Beede, Cathy Buffington, Nathan Goldschlag, Lucia Foster, and Emin Dinlersoz

This paper resulted from a collaboration between several academics, private industry (Burning Glass Technologies), the U.S. Census Bureau, and NBER.

Here’s their abstract (also broken into a bulleted list):

    • We introduce a new survey module intended to complement and expand research on the causes and consequences of advanced technology adoption.
    • The 2018 Annual Business Survey (ABS), conducted by the Census Bureau in partnership with the National Center for Science and Engineering Statistics (NCSES), provides comprehensive and timely information on the diffusion among U.S. firms of advanced technologies including artificial intelligence (AI), cloud computing, robotics, and the digitization of business information.
    • The 2018 ABS is a large, nationally representative sample of over 850,000 firms covering all private, nonfarm sectors of the economy.
    • We describe the motivation for and development of the technology module in the ABS, as well as provide a first look at technology adoption and use patterns across firms and sectors.
    • We find that digitization is quite widespread, as is some use of cloud computing. In contrast, advanced technology adoption is rare and generally skewed towards larger and older firms.
    • Adoption patterns are consistent with a hierarchy of increasing technological sophistication, in which most firms that adopt AI or other advanced business technologies also use the other, more widely diffused technologies.
    • Finally, while few firms are at the technology frontier, they tend to be large so technology exposure of the average worker is significantly higher.
    • This new data will be available to qualified researchers on approved projects in the Federal Statistical Research Data Center network.

More unreasonably great research is likely coming from some of the authors above.

I’m impressed by both studies, particularly in comparison to the typical reports found in the marketplace, which are based on far less rigorous, often quite defective, surveys. Don’t expect every survey to be done as well as the two studies referenced above; that’s just not practical. But there are many ways to improve on the current lot.

Objections to Brynjolfsson et al (2021) and Zolas et al (2020)

Is it appropriate to compare rigorous academic research to the “junk science” of the market reports littering the Internet? Yes! Is it a fair fight? Of course not. Still, people shouldn’t corrupt their decision processes with data of dubious quality from newsletters, advertisements, and commercial proposals.

Is the answer for business decision-makers and their staffs to depend primarily on academic papers? That’s highly impractical from three viewpoints: skill, interest, and time. Most executives lack the skills to understand the nuances of this kind of analysis. They also lack the interest in personally developing those skills. And they lack the time to do it. (Which category are you in?)

Follow up

Collectively, we buyers, sellers, advisors, and observers need survey standards and ratings that mere mortals can turn to for help in quickly judging working-level, survey-based research. I’m working on that. I have several related pieces on my agenda, including:

    • Exploring (defining and fleshing out) all the elements listed as part of the “Great Research Framework” (above)
    • Comparing the results of Brynjolfsson et al. (2021) to the top search returns from Google Search and Google Scholar for the string “Can predictive analytics improve business performance?”
    • If and when Zolas et al. (2020) is repeated with updated survey data on technology adoption rates, comparing their results for AI and related topics to the claims surfaced by the same top Google Search and Google Scholar approach.
    • A report-card-type tool that non-academic readers can use to rate for themselves the quality of the survey-based research they’re reading (a rough sketch of what such a rubric might look like follows this list).
    • An update on my work on the “Technology-Industry-Complex” and the great hype-selling machine it runs.

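To make that report-card idea concrete, here’s a rough, purely illustrative Python sketch of what such a rubric might look like. The criteria loosely mirror the Great Research Framework above; the specific questions, weights, and letter-grade cutoffs are placeholders of my own, not the finished tool.

# Purely illustrative sketch of a survey-research "report card".
# The criteria mirror the Great Research Framework above; the questions,
# weights, and letter-grade cutoffs are placeholders, not a finished instrument.

CRITERIA = {
    "Methods and sampling are fully disclosed (honest and transparent)": 2,
    "Sample is large and representative of the target population": 2,
    "Instrument and methods are validated / independently reviewed": 2,
    "Findings are framed against prior literature and history": 1,
    "Conclusions are willing to contradict conventional wisdom": 1,
    "Sponsor has no product or service riding on the result": 2,
}

def grade(answers: dict[str, bool]) -> str:
    """Turn yes/no answers on each criterion into a rough letter grade."""
    earned = sum(weight for criterion, weight in CRITERIA.items() if answers.get(criterion))
    possible = sum(CRITERIA.values())
    share = earned / possible
    if share >= 0.9:
        return "A"
    if share >= 0.7:
        return "B"
    if share >= 0.5:
        return "C"
    return "D (treat as fish bait)"

# Example: a vendor-sponsored survey that discloses little.
print(grade({
    "Sample is large and representative of the target population": True,
}))  # -> "D (treat as fish bait)"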
(c) 2022 Tom Austin, All rights reserved

Endnotes

[1] https://thansyn.com/confronted-with-too-much-tech-fud-and-fomo/

The Technology-Industry-Complex (TIC) works hard at speeding the rate of adoption of new technologies well before they’re ready for market.

    • Who’s part of TIC? It’s not just suppliers of goods and services. It also includes vendors, investors, incubators, analysts, consultants, business schools, the trade, business and general press, and, to a lesser degree, academics.
    • Goals? Expand their markets as quickly as possible.
      • At the micro level, the fundamental mission of TIC is to get buyers to part with more money more quickly than they should or otherwise would.
      • At the macro level, its primary mission is to artificially inflate the perceived market momentum for specific goods and services.
    • Methods? Shade the truth, distort surveys and data, selectively filter experiences, and put the twin fears of imminent destruction (FUD — fear, uncertainty, and doubt) and the fear of missing out (FOMO) squarely into the consciousness of everyone they can reach.

Errata: moved hyperlinks from end-notes to embeddings in the text, 3 April 2022.

Tom Austin

© 2022 – Tom Austin — All Rights Reserved.
This research reflects my personal opinion at the time of publication. I declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.