There was a time when technology hype was instigated only by the marketing departments of vendor companies. Today it starts building in scientific journals themselves.

Like any human activity, science has its problems. Perhaps the two main ones today are:

  • Ineffective methods for assessing research output, which has led to an explosion of poor or even fake (“predatory”) journals and some debatable academic careers in many countries
  • An increasing number of researchers with weak competencies in organizing experiments and analyzing results, leading to a reproducibility crisis first recognized at the beginning of the century

The possibility for third-party researchers to replicate a published study is one of the founding pillars of science. Remove this constraint, and no distinction can be drawn between real science and fake science.

In some fields, such as epidemiology, large clinical trials, or high-energy physics, replicating an experiment in full is wildly impractical. In these cases a weaker control strategy, called reproducibility, is applied: the data sets and computer code produced by and employed in the experiment are made available to others, who can verify the published results and carry out alternative analyses.

It turns out that when third parties study an experiment published in a serious journal and reanalyze its data, they often fail to reach the conclusions of the original authors. The reported mismatch ranges from 85% in clinical medicine and psychology, to 50% in computer science, to 30% in physics. Not encouraging.

Science hype

In recent times a new phenomenon, annoying albeit less malignant, has emerged in science, one with a more direct impact on people who must make investment decisions about emerging technologies: plain hype.

Hype development in science looks like this:

(1) A scientific paper claims a finding of some magnitude, say ‘M’
(2) An ensuing press release about the article describes the finding as if it were three times as big: 3xM
(3) The media announce a 10xM breakthrough

1. Scientific paper

A scientific paper can create hype through a catchy, bombastic title that oversells the finding, with an abstract underneath that is only slightly less optimistic. It happens in top-tier journals, to say nothing of the lower-level ones or the thousands of predatory/fake ones.

As an example: Towards Prediction of Financial Crashes with a D-Wave Quantum Computer. (Wow!) The abstract plays it down a bit, admitting that the article merely “paves the way” to analyzing quantitative macroeconomics. Only by reading the full text do you learn that no quantum algorithm can predict the next financial crash: there are only studies and hypotheses on how quantum computing might one day, perhaps, be used for that purpose.

Often enough, only the full text below the abstract contains the real discovery or invention. But the public doesn’t see it, not least because they don’t have free access to it (the publisher may, at best, offer to rent the article for a limited time for something like $9.99). Hence most readers take their message from the title or the abstract. And oftentimes a scientific result of actual magnitude 1 is outlined in an abstract claiming 1.2, placed under a title claiming 3.

Another example: Generating conjectures on fundamental constants with the Ramanujan Machine. Srinivasa Ramanujan was a phenomenal mathematician, distinguished particularly for his talent in formulating new theorems. Giving his name to an AI algorithm that aims to invent conjectures and theorems is evocative and cool. But it also alludes to capabilities the paper does not prove: fifteen years ago Nature would not have allowed such a title, yet in February 2021 it did.

The trend is certainly not discouraged by publishers. The most revered scientific journals worldwide are owned by a handful of companies, which have a natural inclination to compete for a successful presence in every channel and context.

A few years ago, researchers studied medical publications in PubMed and found that the frequency of 25 positive words such as “novel”, “amazing”, “astonishing”, “unique”, and “unprecedented” had increased nine-fold between 1974 and 2014.

I bet the biggest increase occurred in 2005-2014, after scientific publishing became dominated by market forces.

2. Press release

Often, right after the preprint appears and without waiting for peer review and publication in a journal, scientific papers are advertised in press releases issued by the authors’ institutions, typically universities or companies with substantial R&D budgets.

The press release does its job, amplifying the importance of the paper’s findings by some factor. Its purpose is to compete with a multitude of simultaneous sources and topics for the attention of journalists (and robots) in order to make it into the media.

3. In the media

The media, including the specialized/trade press, are subject to the same competitive rules. And the competition is orders of magnitude larger, encompassing thousands of technical blogs and social-media influencers in computing, life sciences, finance, economics, physics, mathematics, you name it.

Many such sources are incompetent or outright fake, like splogs (spam blogs) and flogs (fake blogs); but even a large portion of genuine bloggers and influencers, as well as journalists, fall prey to the promotion that started with a catchy “scientific” title.

Add that the big publishing groups have become high-caliber newscasters of their own science, actively rebranding their product lines in recent times: Holtzbrinck Publishing Group / Springer Nature (Nature), or Elsevier / RELX Group (The Lancet). Such groups master the art of branding and packaging, with profit as the primary objective.

See for example ‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures. This was a big late-2020 case, pumped by a report in Nature News, a media outlet covering the roughly 200 scientific journals of the Springer Nature group. Pieces in Nature News are (excellent) journalism, but the media regularly mistake them for scientific papers because of the Nature name in the headline.

So in that case, virtually everyone in the media assumed that AI algorithms had shown “general intelligence” by “solving” one of the toughest scientific problems of our time. I discussed the case here for the Analyst Syndicate, reporting the opinions of real experts.

So what?

Scholars rely on publications not only as a fundamental step of the scientific method but also, unfortunately, as a career vehicle.

This sometimes creates the urge to make one’s scientific publications sensational, widely cited, and massively disseminated even outside research circles. Perhaps one paper in five is born this way (four in five if we include low-grade scientific journals in the count).

This is encouraged by the scientific publishing industry, which is largely driven by (legitimate) business interests and attitudes.

Scientific dissemination is even more so.

The result is that whenever we read of a new scientific feat, we should be wary of the amplification the actual result has likely gone through. It can occasionally be large.

Many technology vendors, in every sector from life sciences to artificial intelligence, will take advantage of this to embellish their product descriptions and prospects, creating further hype.