There is talk of an ethics of AI. But it is really the ethics of information technology.

Many ponderous documents, issued by think tanks, universities and venerable organisations like the UN, the EU or the OECD, dwell on the topic of the ethics of Artificial Intelligence systems.

The cliché is more or less always the same.

After recognizing that AI is a great technology with the potential to transform society and business, the reports instruct politicians, media and citizens that they should also be worried, because AI takes decisions based on inscrutable logic, contains biases, can sometimes be unsafe and, unless properly directed, will not ensure a fair and just society.

Another thing common to all such documents is the following:

All occurrences of “Artificial Intelligence” could be replaced by the word “software” without changing the meaning.

Why?

Because AI today is but another evolution of software technology, as happened with high-level symbolic programming languages in the Fifties, database management systems or BASIC in the Seventies, or ERP in the Nineties.

Today we have machine learning, deep learning, neural networks and a bit of good ol’ knowledge-based systems: the mix also known as AI.

Tom Austin, the Analyst Syndicate founder, recently wrote a piece that illuminates the point. In Stop confusing AI with Intelligence, he advises that

«we would be better off not using the AI term in common parlance», because whenever it is used, «people start thinking about fictional capabilities, over-generalizing from narrow accomplishments».

I could not agree more. We should definitely abandon our «science-fiction-inspired images of AI» (ibid).

So I believe that the fora discussing AI ethics nowadays would all be better off saying computer technology ethics or information technology ethics. Or they could say Ethics of Software: there is no intelligent, automatic or autonomous device these days that is not programmed in software…

Look no further than the past few decades

Software has already been hugely transformative of business and society. It has already worked magic in the past, even without the AI label attached to it.

At times it has also been evil. There is no shortage of disasters caused by information technology or software technology.

Automatic flight controls helped take down two scheduled passenger flights in less than six months between 2018 and 2019, causing the death of 346 people.

Precious spacecraft have been lost to coding errors, like Mariner 1 and the Mars Climate Orbiter, or the European Space Agency’s Ariane 5, which had cost €7 billion to develop and reduced half a billion in satellite equipment to ashes when Flight 501 exploded.

In 2004, new software installed at the UK Department for Work and Pensions caused the overpayment of 1.9 million people and the underpayment of another 700,000, plus a series of additional welfare malfunctions, ultimately costing the taxpayer £780 million.

In August 2012, a banal human error caused Knight Capital Group’s high-frequency trading software to go astray, costing $440 million and eventually killing the company.

The damage, risk and turmoil caused by hacker attacks of various kinds need not be recalled here.

Inscrutable logic

All the above nightmares have coding errors, software bugs, hardware glitches or environmental interference with computers as their ultimate causes.

Sometimes such causes are extremely difficult, if not practically impossible, to diagnose. But systems can be developed, deployed and used in ways that make catastrophic crashes very unlikely.

So why all this talk of “ethics of AI” instead of ethics of technology or ethics of automation or ethics of software?

One reason is that automation now promises to invade previously untouched spaces, such as sitting alone in the back seat of our car at highway speed or in New York City traffic, being visited by a robot doctor or talking to an intelligent sofa.

These scenarios feel closer to our personal sphere than the software applications of the past. They evoke a Frankenstein-like unease and are hard not to talk about, however imaginative or distant in the future they may be.

But then another reason is that

it is simply too cool to sit in a Committee on the ethics of Artificial Intelligence!

What to make of all this?

That’s perhaps a good thing.

The «science-fiction-inspired images of AI» make such an assignment incredibly sexy in the public eye: if we took them away, the Committees would be deserted and nobody would be taking care of the ethical and legal implications of software and related technology, as happened in the past.

Do listen to ethics committees’ recommendations. And get involved. But let’s not confuse AI with Intelligence.