In Discover unreasonably great research! And exploit it, I introduced the concept of Unreasonably Great Research, characterized it briefly via a framework, and offered two examples.

This note is the first in a framework-defining series exploring the elements of the unreasonably great research framework. Here, you and I will use the attributes of honesty and transparency to explore the fundamentals of evaluating the actions of the organizations and people behind research that appears to be survey-based. (Not all research that appears to be survey-based actually is survey-based.)

The series is a prelude to a report card to help you quickly and reliably judge the quality of the numbers and recommendations flooding you, sources of which include the media, vendors, consultants, analysts, peers in your firm and industry, academics, and other authors.

Reference Framework

Unreasonably great survey-based research is:

Honest and transparent, scientific, well-designed and executed, based on very large-scale surveys using validated methods, sampling data from a majority of enterprises in the nation, and passing the rigorous tests typically applied to scholarly work (including but not limited to independent peer review).

Contextually framed, starting with extensive literature reviews in economics and other social sciences, encapsulating the history and prior art related to the specific area under study.

Not bound by context or conventional wisdom, it often deviates from or redirects earlier findings, conclusions, and beliefs.

Research that fits well into this framework isn’t perfect. Of course not. But it’s phenomenally better than most of what’s usually pumped out by the technology-industry-complex.

Note Structure

    • Actions to take
    • Definitions of terms
    • Background discussion
    • Application of actions
    • What comes next

Actions to take

These questions form the basis of evaluating organizations and people through the high-level attributes of honesty and transparency. I’ve pushed the definitions and background discussion to later in this note; dive into the questions first.

Expectation setting: Some people will answer some of the questions posed in this series of framework-defining notes, but do not expect most to. These are questions for you to consider, and considering them should be an essential part of your due diligence efforts.

Examine the organization and people offering the report.

  • Trust: Do you trust them? Do you trust the report and claims in the report? How confident are you of those two assessments?
  • Integrity: Do you know their motives, intentions, and competencies? Have you evaluated the business and financial incentives that might introduce unstated biases in the analysis? Are potential conflicts of interest exposed and discussed? Who has sponsored the research and the report on the research? A single entity? A small group of entities? A broad group?
  • Split trust: If a large, established, and world-renowned consulting firm cites a third-party survey-based study, do you automatically trust the third party’s findings and recommendations? The consulting firm’s findings and recommendations? How far do you trust them? How do you test them?

Examine the entity doing the survey design, testing, execution, analysis, and reporting.

  • Reputation: Look for awards, certifications, employee reports, references from people you know, and third-party references, but don’t overweight reputational content.
  • Centrality: Is this type of research central to the organization’s regular and ordinary work? How important is it to their business?
  • Volume: How extensive is their experience in doing survey-based research? Look for a steady, heavy stream of press releases about survey-based research studies they’ve done.
  • Specificity: How much of that experience is in the specific domains that are central to the study at hand? Are the press releases scattered across many different domains, or do a significant number appear clustered in areas directly relevant to the study at hand?
  • Focus: Are the findings tangential fallout from another study or central to the entire survey?

Examine your role.

  • Involvement: Do you delegate this analysis to others, take it on yourself, or skip asking such questions altogether?

Definitions of terms

Honesty implies a spectrum of virtuous attributes such as integrity, truthfulness, trustworthiness, loyalty, fairness, and sincerity. It’s bound tightly to reputation, which is fed by experiences you and others have had.

Transparency is a metaphor for visibility and openness, and it’s the opposite of opaqueness or hiddenness.

Evidence of honesty and a proper level of transparency feed reputation and build trust, which should raise your confidence in the research you’re examining. The opposite is also true.

Background Discussion

The research you read is almost never completely transparent. In many ways, privacy is paramount. Proprietary information and trade secrets are the lifeblood of most enterprises, and that can bleed over into general opacity at some firms.

Privacy rights take precedence over transparency: if anonymized data can be successfully deanonymized, providing access to it can result in privacy violations. Beyond those risks, there are setup and transaction costs for maintaining even an appearance of transparency.

There are significant differences in transparency based on industry, geography, culture, and professions of involved parties. Two examples:

    1. Academic communities: the closer researchers are to the academic community their work relates to, the more they expect the same level of transparency they would have in academia.
    2. Industry/Geography: AI engineers at many firms in Silicon Valley demand the opportunity to share information on their research with others in their specific discipline. If they’re unsatisfied, they’ll walk across the parking lot and go to work for a competitor, or so the thinking goes.

A deeper dive on transparency: A later note in this framework-defining series will focus on the design and execution of survey-based research and include a deeper dive into related transparency issues.


Application of actions

Let’s briefly use the actionable questions to evaluate the two studies I highlighted in Discover unreasonably great research! And exploit it: The power of prediction: predictive analytics, workplace complements, and business performance (April 2021) by Erik Brynjolfsson, Wang Jin, and Kristina McElheran, and Advanced Technologies Adoption and Use by U.S. Firms: Evidence from the Annual Business Survey (December 2020) by Nikolas Zolas, Zachary Kroff, Erik Brynjolfsson, Kristina McElheran, David N. Beede, Cathy Buffington, Nathan Goldschlag, Lucia Foster, and Emin Dinlersoz.

Examine the organization and people offering the report.

    • Trust: Extremely high.
    • Integrity: Extremely high.
    • Split trust: Not applicable here.

Examine the entity doing the survey design, testing, execution, analysis, and reporting.

    • Reputation: Extremely high
    • Centrality: Extremely high
    • Volume: Extremely high
    • Specificity: Extremely high
    • Focus: Extremely high

Examine your role.

    • Involvement: I did this analysis myself, and I am seeking peer review of this note from others.

Net assessment

On the organizations and people dimension, these two papers have earned my highest regard, and I see no obvious flaws or shortcomings in terms of either honesty or transparency.

This is an assessment of only a small part of the framework, and it is only part of the overall report card I’ll use to evaluate other reports in future research.

What comes next?

The next note in this unreasonably great research framework series will focus on the Scientific attribute: what does it mean, and what are its implications for findings and recommendations from survey-based research?

(c) 2022 Tom Austin, All rights reserved