Smarter than AI

Stop confusing AI with Intelligence


Artificial intelligence and animal intelligence aren’t the same thing. Goldfish are more intelligent than AI, even if they can’t play Go. You can’t play Go either.

Root cause analysis

There is a fundamental disconnect between how AI researchers and the general public think about intelligence. Google’s Francois Chollet[1] was the first person to express the disconnect clearly for me. The disconnect’s consequences are troubling. The solution is simple, but it will likely be ignored because it takes much of the fun out of AI fantasies. 

Chollet’s observation

According to Chollet, AI researchers judge intelligence in terms of how well a model performs a specific skill-based task, whether it’s identifying pedestrians in video streams or playing chess. The public views intelligence in terms of how well others apply knowledge across fields.

Researchers focus on more powerful but less efficient specificity — the public, on more efficient generalization.

Researchers celebrate the fact that they can now throw ever-larger amounts of data and compute cycles at a narrow problem to demonstrate new, more powerful skills. Who cares if it’s inefficient[2]? They bask in the glory of making new, more complex models work.

The public judges how well other people generalize: take experience in one area and apply it in another. AI researchers only wish they could figure out how to do that.

Meanwhile, when the public hears that a computer can play Go better than a human, they immediately assume those powers can be generalized to any number of other tasks. That’s a really bad assumption.

Watch out for widespread misperceptions!

In general, outside of AI research, people should recognize that AI is not about creating the kind of intelligence that people, dogs, rats, or field mice exhibit.

But they don’t.

To the public, AI trains itself, gets smarter and smarter, sets its own goals, and generalizes from prior experience. None of that is true.

AI technology knows nothing

The “AI” does nothing on its own. Experts program how the learning algorithm works. The “AI” only does what it has been programmed to do (via code, data, data tagging, explicit models, specified starting parameters, and complex training, testing, and operating procedures).
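To make that concrete, here is a minimal sketch in Python (all numbers and names invented for illustration, not any particular system) of a “learning algorithm”: every ingredient, from the data and labels to the model form, the starting parameter, and the training loop, is written or chosen by a person.

```python
# Minimal, illustrative sketch: every ingredient below is human-specified.

# Human-chosen inputs and human-made labels (the "data tagging").
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]

# Human-chosen model form (y = w * x) and human-chosen starting parameter.
w = 0.0

# Human-written training procedure: plain gradient descent on squared error.
learning_rate = 0.01
for _ in range(1000):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad

# The "learned" behavior is nothing more than this fitted number.
print(f"fitted weight: {w:.2f}")  # roughly 2.03 on these made-up points
```

Nothing in that loop decides to learn, picks its own goal, or understands what the numbers mean; it grinds through the procedure a person wrote, exactly as written.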

There are great things we have been able to do using technologies labeled as “Artificial Intelligence.” They reflect the capabilities of great researchers and programmers. What they create is not as intelligent as a goldfish.

Kill the confusion

We would be better off not using the AI term in common parlance. It’s a canard. Call it scrambled eggs, and you’re just as accurate. Whenever the term AI is used, people start thinking about fictional capabilities, over-generalizing from narrow accomplishments.

Most researchers pursuing a long-term goal of creating “an artificial intelligence” will admit that what they’re doing, while astounding, surprising, and more potent than what we were able to do in earlier times, is not really intelligent in the way most people think about intelligence.

Consider a person with a telescope. The telescope gives the person super-human powers of sight. Is the telescope a super-human tool? No. Is it intelligent? No.

Consider a person with an AI-based analytical tool. The tool gives the person super-human powers of perception, detecting previously unseen patterns. Is that tool an intelligent, super-human tool? No. What if the tool is able to predict a future state? That’s just rummaging through prior data to find similar historical patterns. Powerful? Very often, but also imperfect. Intelligent? Pick the goldfish if you want intelligence.
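If it helps to see that rummaging in code, here is a toy Python sketch (made-up numbers, a bare nearest-neighbor lookup, not any product’s actual method): find the most similar historical pattern and report what followed it.

```python
# Toy sketch (invented numbers): "predict" by rummaging through prior data
# for the most similar historical pattern, then reporting what came next.

history = [
    # (observed pattern, what happened next)
    ([10, 12, 15], 18),
    ([ 5,  4,  4],  3),
    ([20, 19, 21], 22),
]

def predict(pattern):
    # Pick the historical pattern closest to the current one (squared distance)
    # and return the outcome that followed it. No understanding involved.
    def distance(past):
        return sum((a - b) ** 2 for a, b in zip(past, pattern))
    closest_pattern, next_value = min(history, key=lambda row: distance(row[0]))
    return next_value

print(predict([11, 13, 14]))  # -> 18, because [10, 12, 15] is the closest match
```

Useful when the future resembles the past; blind, and confidently wrong, when it doesn’t.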

Let’s all ask Siri

“So, would you like some bacon with your scrambled eggs, Siri?”

Siri will respond — until reprogrammed — “This is about you, not me,” a sentence written by a person, a sentence hung off many branches of Siri’s decision trees.
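Here is a toy Python sketch of that idea, emphatically not Siri’s actual code, just an illustration of a human-written sentence hung off a branch a human anticipated:

```python
# Toy illustration (not Siri's real implementation): canned replies written
# by people, returned whenever the input falls into a branch a person foresaw.

def assistant_reply(utterance: str) -> str:
    text = utterance.lower()
    if "you" in text and ("like" in text or "want" in text):
        # A human wrote this sentence and hung it off this branch.
        return "This is about you, not me."
    if "weather" in text:
        return "Here's the forecast."   # another human-written reply
    return "I didn't get that."         # the fallback, also human-written

print(assistant_reply("Would you like some bacon with your scrambled eggs?"))
# -> "This is about you, not me."
```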

What we need is common sense, not overhyped metaphors. 

Speaking of common sense, one of my favorite authors, Cassie Kozyrkov[3], recently published a piece on how machine learning really works[4]. An 8-minute read, it stays away from the complex math but gives you a real taste of what’s going on:

  • It isn’t magic.
  • It isn’t intelligent; it’s code.
  • The core concepts are embarrassingly simple.
  • Abandon your science-fiction-inspired images of AI.

It’s worth the read!

Disclaimer: This post is my own opinion. 

Endnotes:

[1] François Chollet, “On the Measure of Intelligence,” 6 November 2019, https://arxiv.org/pdf/1911.01547.pdf

[2] There has been pushback against the “ever more resources” approach’s insensitivity to inefficiency, but the more efficiency-minded work hasn’t yet yielded the same crowd-stunning results.

[3] Cassie Kozyrkov, head of Decision Intelligence for Google.

[4] https://medium.com/hackernoon/machine-learning-is-the-emperor-wearing-clothes-59933d12a3cc

Disclosure

The views and opinions in this analysis are my own and do not represent positions or opinions of The Analyst Syndicate. Read more on the Disclosure Policy.
