New research seems to put us on the right path to removing one major source of risk in unattended AI systems such as autonomous cars.

We know that Machine Learning today includes, alongside its successes, a few worrying glitches: pandas inexplicably classified as gibbons, men mistaken for starlets, school buses becoming snowplows, robots’ reasoning derailed by stuffed baby elephants, and STOP signs read as Speed Limit 45 by self-driving vehicles after a little graffiti.

Not a reassuring picture for unattended, mission-critical AI systems such as autonomous vehicles.

Every now and then scientists think they are close to understanding why those adverse side effects happen, although most of the time they remain pessimistic, aware that they don’t really know what they don’t know about the inner workings of Machine Learning / Deep Learning.

These days, however, we are in an upbeat period, thanks to recent work by researchers spread between Germany and Canada.

Shortcut learning

According to these researchers, the main reason for unpredictable Machine Learning mistakes is that deep Neural Networks sometimes follow unintended shortcut strategies which, while superficially successful, can fail under slightly different circumstances.

Examples:

  • using object location instead of shape, when given the task of recognizing objects. This can happen when both the shape of an object and its location (in the image being analyzed) are valid solutions under the training setup constraints, so there is no reason to expect the neural network to prefer one over the other;
  • recognizing objects not by their foreground appearance but by their background, their texture, or some other shortcut not always obvious to humans;
  • using the foreground appearance together with one of those other features. For example, picking up hospital-specific tokens printed on X-ray scans in addition to the clinically relevant features: this led a pneumonia-diagnosing AI to work well only on X-rays from the hospital where it had been trained (whose scans carried that hospital’s logo), while making huge mistakes on other hospitals’ scans (a toy sketch of this follows the list).
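
To make the idea concrete, here is a minimal, entirely invented sketch in Python with scikit-learn: a toy classifier is trained on data where a spurious “hospital tag” feature happens to track the diagnosis perfectly, alongside a genuinely informative but noisy signal. Nothing below reproduces the researchers’ actual experiments; the feature names and numbers are mine.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, tag_follows_label):
        """Synthetic stand-in for the X-ray story above (all values invented).
        Feature 0 is a noisy but genuine signal; feature 1 plays the hospital tag."""
        y = rng.integers(0, 2, size=n)
        signal = y + rng.normal(0.0, 1.0, size=n)           # genuinely informative, but noisy
        if tag_follows_label:
            tag = y.astype(float)                           # spurious feature tracks the label
        else:
            tag = rng.integers(0, 2, size=n).astype(float)  # spurious feature is pure noise
        return np.column_stack([signal, tag]), y

    # Training data from "hospital A": the tag happens to track the diagnosis perfectly.
    X_train, y_train = make_data(2000, tag_follows_label=True)
    model = LogisticRegression().fit(X_train, y_train)
    print("learned weights [signal, tag]:", model.coef_[0])  # the tag tends to get the larger weight

    # A same-distribution test looks fine; breaking the correlation exposes the shortcut.
    X_iid, y_iid = make_data(2000, tag_follows_label=True)
    X_shift, y_shift = make_data(2000, tag_follows_label=False)
    print("accuracy, same distribution:", model.score(X_iid, y_iid))      # near-perfect
    print("accuracy, shifted test set :", model.score(X_shift, y_shift))  # drops sharply

The classifier is never told which feature is the “right” one, so it leans on whichever is easiest (here, the tag), exactly the kind of unintended strategy described above.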

In other words: in keeping with our stubborn habit of calling this AI thing ‘intelligence’, Neural Networks are typically assumed, even by their own developers, to use the same criteria that we humans would use to accomplish a task.

But this can be illusory.

Life scientists, for example, are familiar with animals tricking us by solving an experiment in an unintended way, without using the underlying ability the experimenter is actually interested in: like rats navigating a maze by following not the colors but the odor of the paint.

Turns out that artificial neural networks, too, are capable of subtle shortcuts.

A path to the solution

The problem is the discrepancy between what humans believe the task to be and what Deep Learning models are actually incentivized to learn from the examples they are given.

This seems to suggest that we should stop testing Machine Learning systems only on data drawn from the same distribution as the training set, and seek instead, in the words of Geirhos et al., “good out-of-distribution tests that have a clear distribution shift, a well-defined intended solution and expose where models learn a shortcut”.
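
As a minimal sketch of what such a test could look like in practice, the snippet below (my own illustration, not taken from the paper) holds out entire groups, say whole hospitals, rather than random rows, so that the test fold carries a genuine distribution shift. The data here is random noise, used only to make the example runnable; what matters is the splitting pattern.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GroupShuffleSplit

    rng = np.random.default_rng(1)

    # Hypothetical setup: each row is a patient, `hospital` records the site that
    # produced the scan. Features are random noise, purely to make this runnable.
    X = rng.normal(size=(3000, 20))
    y = rng.integers(0, 2, size=3000)
    hospital = rng.integers(0, 10, size=3000)   # ten sites

    # Hold out entire hospitals: site-specific artefacts learned during training
    # cannot pay off on the test fold, so a shortcut would show up as an accuracy
    # gap versus a conventional random split.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
    train_idx, test_idx = next(splitter.split(X, y, groups=hospital))

    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    print("accuracy on unseen hospitals:", model.score(X[test_idx], y[test_idx]))

On a real dataset, comparing this number with the accuracy obtained on an ordinary random split is one simple way to expose where a model has learned a shortcut.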

This may take years to accomplish and to extend to most AI/ML systems.

Furthermore, the authors warn that we may never be able to solve shortcut learning in AI systems completely: after all, Machine Learning is grounded in the very idea of generalizing from limited information, so errors are always to be expected.

I guess our objective should be to make the probability of those errors smaller than it is for the humans, or the existing automated systems, that AI will replace.

Many other issues stand in the way of full-scale production AI systems in general (autonomous or not), including

  • devising convincing benchmarks against the existing systems they are intended to replace;
  • finding the enormous computing resources required without draining the whole world’s electricity, and packaging ad-hoc hardware within general-purpose computers;
  • optimizing the compute performance of training algorithms;
  • achieving replicability or at least reproducibility of research results;
  • providing some explainability of solutions;
  • merging machine learning with symbolic reasoning, if aiming for the ‘broader AI’ that lies between today’s capability and the mythical, likely unattainable, ‘general AI’…

But, all in all, it seems to me that today the AI community has positioned itself on the right path to getting rid of one major fault of those production AI systems that are fully autonomous: their inherent unreliability.

So what?

This means that by 2025 we may have solved some of the major issues that stand in the way of reliable standalone AI.

From then on, this could mean a substantial improvement for autonomous vehicle technology, as well as for a number of applications in manufacturing, healthcare, online media, and warfare.

2020 PAOLO MAGRASSI CC BY 4.0

I am the author of this article and it expresses my own opinions. I have no vested interest in any of the products, firms, or institutions mentioned in this post.