
Stop confusing AI with Intelligence
Reading time: 2 minutes, 40 seconds.
Artificial Intelligence and animal intelligence aren't the same thing. Goldfish are more intelligent than AI, even if they can't play Go. You can't play Go either.
Root cause analysis
There is a fundamental disconnect between how AI researchers and the general public think about intelligence. Google's François Chollet[1] was the first person to express the disconnect clearly for me. The disconnect's consequences are troubling. The solution is simple, but it will likely be ignored because it takes much of the fun out of AI fantasies.
Chollet's observation
According to Chollet, AI researchers judge intelligence in terms of how well a model performs a specific skill-based task, whether it's identifying pedestrians in video streams or playing chess. The public views intelligence in terms of how well others apply knowledge across fields.
Researchers focus on more powerful but less efficient specificity; the public, on more efficient generalization.
Researchers celebrate the fact that they can now throw ever-larger amounts of data and compute cycles at a narrow problem to demonstrate new, more powerful skills. Who cares if it's inefficient[2]? They bask in the glory of making new, more complex models work.
People look at how well others generalize: they take experience in one area and apply it in another. AI researchers only wish they could figure out how to do that.
Meanwhile, when the public hears that a computer can play Go better than a human, they immediately assume those powers can be generalized to any number of other tasks. That’s a really bad assumption.
Watch out for widespread misperceptions!
In general, outside of AI research, people should recognize that AI is not about creating the kind of intelligence that people, dogs, rats, or field mice exhibit.
But they don't.
To the public, AI trains itself, gets smarter and smarter, sets its own goals, and generalizes from prior experience. None of that is true.
AI technology knows nothing
The "AI" does nothing on its own. Experts program how the learning algorithm works. The "AI" only does what it has been programmed to do (via code, data, data tagging, explicit models, specified starting parameters, and complex training, testing, and operating procedures).
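To make that concrete, here is a minimal, hypothetical sketch (not any vendor's actual code) of a "learning" program. Every ingredient, from the data and labels to the model form, starting parameters, update rule, and stopping point, is chosen by a person; the program just executes those choices.

```python
# Illustrative only: a one-feature perceptron, the simplest kind of "learning."

# Hand-labeled toy data chosen by a person: (feature, label) pairs.
data = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]

# Explicit starting parameters and hyperparameters, all chosen by a person.
weight, bias = 0.0, 0.0
learning_rate = 0.1
epochs = 100

# Human-specified model form and update rule.
for _ in range(epochs):
    for x, label in data:
        prediction = 1 if weight * x + bias > 0 else 0
        error = label - prediction
        weight += learning_rate * error * x
        bias += learning_rate * error

# The "intelligence" is two numbers produced by a procedure people wrote.
print(weight, bias, [1 if weight * x + bias > 0 else 0 for x, _ in data])
```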
There are great things we have been able to do using technologies labeled as "Artificial Intelligence." They reflect the capabilities of great researchers and programmers. What they create is not as intelligent as a goldfish.
Kill the confusion
We would be better off not using the AI term in common parlance. It's a canard. Call it scrambled eggs, and you're just as accurate. Whenever the term AI is used, people start thinking about fictional capabilities, over-generalizing from narrow accomplishments.
Most researchers pursuing a long-term goal of creating "an artificial intelligence" will admit that what they're doing, while astounding, surprising, and more potent than what we were able to do in earlier times, is not really intelligent in the way most people think about intelligence.
Consider a person with a telescope. The telescope gives the person super-human powers of seeing. Is the telescope a super-human tool? No. Is it intelligent? No.
Consider a person with an AI based analytical tool. The tool gives the person super perception powers, detecting previously unseen patterns. Is that tool an intelligent, super-human tool? No. What if the tool is able to predict a future state? That’s just rummaging through prior data to find similar historical patterns. Powerful? Very often, but also imperfect. Intelligent? Pick the goldfish if you want intelligence.
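To see why "rummaging through prior data" is a fair description, here is a deliberately simplified, hypothetical sketch of prediction as pattern lookup: find the most similar historical case and report what followed it. The data and similarity measure below are invented for illustration.

```python
# Illustrative only: "prediction" as nearest-neighbor lookup over prior data.

history = [
    # (observed pattern, what followed)
    ((10, 12, 15), "rise"),
    ((15, 14, 12), "fall"),
    ((12, 12, 12), "flat"),
]

def predict(pattern):
    """Return the outcome attached to the closest prior pattern."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(history, key=lambda item: distance(item[0], pattern))
    return closest[1]

# Powerful, but imperfect: the answer is only as good as the stored history.
print(predict((11, 13, 15)))  # -> "rise"
```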
Let’s all ask Siri
"So, would you like some bacon with your scrambled eggs, Siri?"
Siri will respond (until reprogrammed) "This is about you, not me," a sentence written by a person, a sentence hung off many branches of Siri's decision trees.
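Siri's internals aren't public, but the general idea can be sketched with a toy example: hand-written branching logic routes the utterance to an intent, and the intent returns a human-authored string. Everything below (the intent names, the branching test, the responses) is hypothetical.

```python
# Illustrative only; not Siri's actual implementation.
# The reply is a human-written string attached to a matching branch,
# not something the assistant "thinks up."

canned_responses = {
    "question_about_assistant": "This is about you, not me.",
    "greeting": "Hello!",
}

def classify(utterance):
    # Hypothetical, hand-written branching logic.
    if "you" in utterance.lower() and "?" in utterance:
        return "question_about_assistant"
    return "greeting"

print(canned_responses[classify("Would you like some bacon with your scrambled eggs, Siri?")])
```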
What we need is common sense, not overhyped metaphors.
Speaking of common sense, one of my favorite authors, Cassie Kozyrkov[3], recently published a piece on how machine learning really works[4]. An 8-minute read, it stays away from the complex math but gives you a real taste of what's going on:
- It isn’t magic.
- It isn’t intelligent, it’s code.
- The core concepts are embarrassingly simple.
- Abandon your science-fiction-inspired images of AI.
It’s worth the read!
Disclaimer: This post is my own opinion.
Endnotes:
[1] "The Measure of Intelligence," François Chollet, 6 November 2019, https://arxiv.org/pdf/1911.01547.pdf
[2] There has been pushback on the "ever more resources" approach's insensitivity to inefficiency. But it hasn't yielded the same crowd-stunning results others have achieved.
[3] Cassie Kozyrkov, head of Decision Intelligence for Google.
[4] https://medium.com/hackernoon/machine-learning-is-the-emperor-wearing-clothes-59933d12a3cc