Non-Human Artificial Intelligence

We may not have Artificial General Intelligence for centuries, but we could have Non-Human Artificial Intelligence much sooner. 

Paul Dirac, who at 29 held Newton’s chair at Cambridge, was one of the greatest physicists of all time. “His great discoveries were like exquisitely carved marble statues falling out of the sky” (Freeman Dyson).

But like Shaun in The Good Doctor TV series, Dirac could not understand jokes or understatements and was known for extreme reticence, literal-mindedness, lack of empathy, and rigid patterns of behavior. “The joys of daily life flummoxed him” (The Guardian), and he was constantly “balancing on the dizzying path between genius and madness” (Albert Einstein).

When he first met his younger colleague Richard Feynman, sitting next to him at a conference, he asked: “I have an equation. Do you have one too?” (The two are pictured in the photo above, taken by Marek Holzman and used on the cover of Physics Today in August 1963.)

I could give countless examples of extremely ‘intelligent’ people who lack common sense and behave in very weird ways.

Kurt Gödel, 1925

Kurt Gödel, who at the age of 23 changed the face of logic and mathematics, starved himself to death in Princeton in 1978 for fear of being poisoned. When he applied for US citizenship in 1947, he risked being turned away when he began explaining that he had found a flaw in the text of the Constitution; he was saved by the scientist friends – including Einstein – who had accompanied him.

Ordinary people who met them on a bus or chatted with them briefly would probably not have called Dirac or Gödel ‘intelligent.’

Artificial Intelligence

AI bears some remote resemblance to Paul and Kurt.

AI lacks common sense. It only works in tiny, super-specific contexts. It cannot use the skills acquired in domain A in a cognitively distant domain B. AI cannot generalize, analogize, feel, be embarrassed, understand jokes, or jump over a fence just for the fun of it.

But who said that every Intelligence in the universe should mimic human intelligence? If extraterrestrials landed tonight, would we expect them to have our same brains and mentality?

Machine Learning – in its various manifestations – can acquire skills that match or surpass human performance in a number of tasks.

Notwithstanding recurring deceptive metaphors like “neural” network or “learning”, Machine Learning does not resemble human intelligence and lacks some obvious human traits, like common sense or an understanding of context.

But it can translate between 100 languages with an approximation good enough to let human translators do their work in one-tenth of the time. No human can do that. Or it can recognize the correct anatomical landmarks, thus greatly speeding up (if not automating: I do not believe in AI replacing humans) magnetic resonance imaging scans, including challenging ones. Perhaps one in ten thousand humans can do that, and at a much slower pace.

AI is being applied with some success (although exaggerated by the media) to music composition, meteorology, quantum mechanics calculations, theorem proving, biometric authentication, and much more. Perhaps in twenty years it will drive cars in Paris more safely than taxi drivers can today.

Something different

Now picture a single computer, with zero judgment and no common sense, that embodied all those specialized capabilities: driving, reading diagnostic medical tests, translating, playing chess and Go at grandmaster level, solving math problems, recognizing a million faces, improvising jazz piano. Call it HAL.

Wouldn’t HAL possess some form of unprecedented ‘intelligence’? Did anything like that, human or not, exist before HAL?

I am not saying that HAL would be intelligent like Dirac or Gödel.

Gödel’s proof of the existence of God

To begin with, even in their fields of specialization, they performed far better than any AI promises it will ever do. Furthermore, they did possess some common sense: for example, they were both married for over 40 years. And, unlike AI, they did reuse their skills in domains cognitively distant from one another, like studying biology or law.

What I am suggesting is that, just as those geniuses seemed to lack some cognitive features commonly found in ordinary people, so HAL could appear intelligent to us despite its limitations.

The geniuses performed with unattainable intelligence in specialized domains. HAL will do equally well in domains that are each much more specialized and limited: but there will be many of them.

We may not have Artificial General Intelligence for centuries, but we may have this thing much sooner.

It’s a Non-Human Artificial Intelligence, an NH/AI. And it will have nothing to do with human intelligence.

So what?

Patently, Artificial Intelligence algorithms don’t understand what they’re doing and lack the common sense needed to make most decisions. However, this may not be the whole truth.

A day may come when they perform so many specialized tasks well enough to appear intelligent to us, like a schoolkid who is not very smart but memorizes a lot, and quickly.

The business or social usefulness of a seemingly intelligent multi-purpose AI is questionable, at least today. But who knows: in, say, 25 years, such an app, device, or robot could turn out to be useful.

2020 PAOLO MAGRASSI CC BY 4.0
Disclosure

The views and opinions in this analysis are my own and do not represent positions or opinions of The Analyst Syndicate. Read more on the Disclosure Policy.