Expect more-powerful AI applications
Machine Learning, AI’s most successful enabling technology, has not yet unfolded its full potential. The 2020s will show us more of what it can do for business and society.
In most situations, the primary goal for developers of Machine Learning/Deep Learning software is to build a model.
A model is a software program that can be trained to learn patterns – like those contained in images, sounds, spoken or written words, or atmospheric perturbations. This experience is then used to recognize similar patterns in novel situations. That’s how ‘AI’ recognizes people, translates languages, or chats with us.
A typical Machine Learning model today is built as an Artificial Neural Network: a piece of software that can be visualized as a complicated graph with many nodes, each receiving inputs and producing outputs. The inputs to each node are multiplied by weights before the node computes its output, and the nodes’ outputs combine into the Network’s overall result.
(The graph is not just horizontally large. It can also be several layers deep, in order to refine findings and conclusions at varying levels of abstraction. That’s why today’s prevailing form of Machine Learning is called Deep Learning.)
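To make the idea concrete, here is a minimal sketch of such a network in pure Python. Everything in it is illustrative – the two layers, the ReLU non-linearity, and the hand-picked weight values are invented for the demonstration, not taken from any real model:

```python
# A toy forward pass through a tiny "deep" network: two layers of nodes.
# Each node multiplies its inputs by weights, sums them, and applies a
# simple non-linearity (here, ReLU).

def node(inputs, weights):
    """Weighted sum of the inputs, followed by a ReLU non-linearity."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return max(0.0, total)

def layer(inputs, weight_matrix):
    """One layer of the graph: each row of weights defines one node."""
    return [node(inputs, weights) for weights in weight_matrix]

# Two inputs -> hidden layer of three nodes -> one output node.
hidden_weights = [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]]
output_weights = [[1.0, 0.5, 0.5]]

x = [2.0, 1.0]
hidden = layer(x, hidden_weights)
output = layer(hidden, output_weights)
print(output)
```

Real networks differ only in scale: millions of nodes, many layers, and learned rather than hand-picked weights.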
The values of the individual weights change as more information is acquired, such as when learning from examples or from external events.
Neural network training
The most common business use of neural networks so far works like this:
A model is coded and the initial weights are assigned arbitrarily, e.g. randomly
training data starts being fed to the model, and
the model’s weights are tweaked by the program (a complex blending of computational statistics and mathematical optimization algorithms) …
… until a predetermined mathematical function is optimized
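The four steps above can be sketched in a few lines of pure Python. This is only an illustration – one weight instead of millions, a hand-derived gradient, and toy data invented for the example – but the structure of the loop is the same:

```python
# Minimal sketch of the training loop: start from an arbitrary weight,
# feed examples, and tweak the weight until a predetermined function
# (here, squared error) is minimized.
import random

random.seed(0)

# Toy training data: the hidden rule to be learned is y = 3 * x.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

w = random.uniform(-1.0, 1.0)   # step 1: arbitrary initial weight
learning_rate = 0.05

for epoch in range(200):        # steps 2-4: feed data, tweak the weight
    for x, y in data:
        prediction = w * x
        error = prediction - y
        # Gradient of the squared error with respect to w is 2*error*x;
        # move the weight a small step against the gradient.
        w -= learning_rate * 2 * error * x

print(round(w, 3))  # converges close to 3.0, the hidden rule
```

Production frameworks (TensorFlow, PyTorch, etc.) automate the gradient computation and run this loop over millions of weights at once, but the principle is identical.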
A different use of essentially the same set-up is the following:
A pre-trained model is kept fixed: optimized weights are left unchanged during subsequent ‘training’ runs
new input ‘training’ data is tweaked…
… until a predetermined mathematical function is optimized
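This inverted set-up can be sketched the same way. In the illustrative example below (a one-parameter ‘model’ invented for the demonstration), the weights stay fixed and gradient descent is applied to the input instead:

```python
# Sketch of the inverted set-up: the model's weights stay fixed, and it
# is the *input* that gets tweaked until the objective is optimized.
# Here the fixed model is f(x) = w1*x + w2*x**2 and we search for the
# input x that makes the output hit a target value.

w1, w2 = 2.0, 1.0            # 'pre-trained' weights: left unchanged
target = 10.0

def model(x):
    return w1 * x + w2 * x ** 2

x = 0.0                      # initial guess for the input
learning_rate = 0.01
for step in range(2000):
    error = model(x) - target
    # Gradient of the squared error with respect to x (not w!).
    grad = 2 * error * (w1 + 2 * w2 * x)
    x -= learning_rate * grad

print(round(model(x), 3))    # close to the target, 10.0
```

The loop ends with an input that the fixed model maps to the target: in effect, the data has been fitted to the model rather than the other way around.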
What you are doing here is not teaching a computer program how to behave in the future: you are testing the validity of a model, a conjecture of your own, using ‘Big Data’.
In scientific research, this has been done for several years and Deep Learning promises to be a useful tool of the trade for scientists (provided that it is not blindly abused: the solutions are not ‘in the data’, they are in scientists’ brains).
Some scholars see these approaches simply as sophisticated statistics, and perhaps they are. But at times the scale of a method, such as in this case the richness of the data and the depth of processing, can produce a qualitative change.
Example: Pharmaceutical research
In pharma research, for example, it is useful to predict how molecules will behave under certain stimuli. Without Machine Learning, this entails writing software to simulate the behavior of atoms within (complex) molecules. The relevant calculations take days or weeks for every run.
An alternative to applying physics equations and solving them numerically is to build Neural Network models representing possible explanations of experimental results, then feed them with actual results until the network is optimized.
This is what GlaxoSmithKline is doing with one of the most powerful Deep Learning computer chips available, made by Cerebras: “Uniquely at GSK, we now do experiments with the express purpose of improving machine learning models”.
The media, including the trade press, tend to describe this work with sensational headlines, like “AI Designs A New Molecule” or “AI Proved Long-Standing Number Theory Conjecture”. But it is actually all about collaboration between scientists and ‘AI’ software.
One Machine Learning technique that has greatly boosted these new approaches is the Generative Adversarial Network (GAN), introduced in 2014 by Ian Goodfellow, Yoshua Bengio, and colleagues.
After examining a dataset of the kind normally used to train a Neural Network to recognize patterns, a GAN is able to generate new data that obeys the same statistics as the original dataset.
That’s why a GAN can, for example, create from a large set of photographs of people a new dataset containing additional, credible pictures of imaginary people: the people are unreal, but their photographs are statistically coherent with those of real ones.
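The following sketch is emphatically not a GAN – a real one pits two networks against each other in an adversarial game – but it illustrates the underlying idea in the simplest possible form: learn the statistics of a dataset, then sample new, ‘imaginary’ data that obeys them. The dataset values are invented for the example:

```python
# Not a GAN, but the same principle stripped to its core: estimate the
# statistics of real data, then generate synthetic data that obeys them.
import random
import statistics

random.seed(42)

# Pretend these are measurements of real examples (e.g. heights in cm).
real_data = [168.2, 171.5, 175.0, 180.3, 162.9, 177.8, 169.4, 173.1]

mu = statistics.mean(real_data)
sigma = statistics.stdev(real_data)

# Generate 'imaginary' samples that follow the same distribution.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

print(round(statistics.mean(synthetic), 1))  # close to the real mean
```

Where this sketch assumes a simple Gaussian, a GAN learns an arbitrarily complex distribution (such as that of human faces) directly from the data – which is what makes its synthetic samples so credible.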
If you think about it, you can recognize in this set-up a method for suggesting new hypotheses or new explanations of a known phenomenon.
Just as Machine Learning is regarded by some as statistics in sophisticated modern dress (see above), these hypothesis-testing methods can also be found in already mature ML/AI applications, such as recommender systems. Results there have not been particularly impressive so far, but the future potential is clear from the recent scientific uses of Deep Learning.
Machine Learning/Deep Learning, AI’s most successful enabling technology today, has not yet unfolded its full potential for business and society. Novel uses of it are spilling out of science labs.
Imagine developing two different marketing campaigns and then using real business results to see which is more effective. Imagine doing the same with financing alternatives. Imagine experimenting with social policies in vitro before actually implementing the most promising ones.
Some issues, like those outlined here and here, still have to be resolved. Furthermore, the next wave will require substantial reskilling of people at various enterprise levels. It will therefore take at least three to five years for new uses of Machine Learning to start appearing in the mainstream.
But we can certainly expect new and powerful business and societal ‘Artificial Intelligence’ applications in the mid-term future. In most cases, they will entail cooperation between human agents and AI systems.
2020 PAOLO MAGRASSI CC BY 4.0
I am the author of this article and it expresses my own opinions. I have no vested interest in any of the products, firms, or institutions mentioned in this post.