The Two (Conflicting) Definitions Of AI

Bill Vorhies

For a profession as concerned with accuracy as ours, data scientists do a really poor job of naming things, or at least of naming them consistently. Take “Big Data”: totally misleading, since it incorporates velocity and variety in addition to volume. How many times have you had to correct someone on that?

And look back at all the things we’ve called ourselves since the late ’90s. These names don’t describe different outcomes or even really different techniques. We’re still finding the signal in the data with supervised and unsupervised machine learning.

So now we have artificial intelligence (AI), for which there are at least two competing definitions: the popular one and the one understood by data scientists. And that doesn’t even account for the dozens of Venn diagrams trying to describe which is a subset of what, all basically in conflict.

I’m sure by now you’ve heard the old joke. What’s the definition of AI?

  • When you’re talking to a customer, it’s AI.
  • When you’re talking to a venture capitalist, it’s machine learning.
  • When you’re talking to a data scientist, it’s statistics.

It would be even funnier if it weren’t true, but it is. So it’s a worthwhile conversation to go directly at these two definitions and see where they conflict, and where, if anywhere, they converge.

The popular definition

This definition got underway 12 or 18 months ago and seems to have unstoppable momentum. In my opinion, that’s too bad, since it’s misleading in many respects. Gathered from a variety of sources and distilled here, the popular definition of AI is:

Anything that makes a decision or takes an action that a human used to take or helps a human make a decision or take an action.

The main problem with this is that it describes everything we do in data science, including every technique of machine learning we’ve been using since the ’90s.

As I gathered up different versions of this to distill here, it became apparent that there are four different groups promoting this meme.

  • AI researchers: They’re getting all the press, and they want to claim “machine learning” as something unique to AI.
  • The popular press: They’re just confused and can’t tell the difference.
  • Customers: They increasingly ask, “Give me some of that AI.”
  • Platform and analytics vendors: If customers want AI, then we’ll just call everything AI and everyone will be happy.

The data scientist’s definition

Those of us professionally involved in all these techniques know that a set of new or expanded techniques has evolved over the last 10 years. These include deep neural nets (such as CNNs and RNN/LSTMs) and reinforcement learning.

These aren’t radically new techniques, since they grew out of neural nets that had been in our toolbox for a long time, but they blew up on the steroids of MPP (massively parallel processing, brought on by NoSQL and Hadoop), GPUs, and vastly expanded cloud compute.

When you look at these from the perspective of the AI founders like Turing, Goertzel, and Nilsson, you can see these newly expanded capabilities as the eyes, ears, mouth, hands, and cognitive ability that started to add up to their vision of what artificial intelligence was supposed to be able to do.

Data scientists understand that the definition of AI as we practice it today is really a collection of six specific techniques, some closer to commercial readiness than others.

Is there any common ground?

It’s narrow, but there is some common ground between these two definitions, and it lies primarily in the backstory for AI. The popular press has mostly represented AI as something brand new. But the correct way to look at this is as an evolution over time.

I think we all understand that we stand on the shoulders of those who came before. Even as far back as the ’90s, we were building handcrafted decision trees that we called expert systems to take the place of human decision-making in complex situations.
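To make that concrete, here is a minimal sketch, entirely my own illustration rather than code from the original article, of what a handcrafted “expert system” decision tree looked like: branching logic written out by hand from domain rules, with nothing learned from data. The loan-approval scenario, rule set, and thresholds are all hypothetical.

```python
# Hypothetical hand-built rules standing in for a human decision-maker.
# Nothing here is learned from data, which is what separates these early
# "expert systems" from machine learning.

def approve_loan(credit_score: int, debt_to_income: float, years_employed: float) -> str:
    """Hand-written branching logic, i.e., a handcrafted decision tree."""
    if credit_score < 600:
        return "decline"
    if debt_to_income > 0.45:
        return "decline"
    if credit_score >= 720 and years_employed >= 2:
        return "approve"
    return "refer to a human underwriter"

print(approve_loan(credit_score=740, debt_to_income=0.30, years_employed=5))  # approve
print(approve_loan(credit_score=650, debt_to_income=0.50, years_employed=1))  # decline
```

Every rule came out of an analyst’s head, not out of the data, which is exactly the lineage the popular definition sweeps up alongside today’s techniques.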

Once you understand that the popular definition wants to include everything that makes a decision, it’s easy to see the progression through machine learning and Big Data into deep learning.

One place where the casual reader needs to be careful, though, is in understanding what elements of AI are commercially ready. Among the six techniques or technologies that make up AI, only CNNs and RNN/LSTMs for image, video, text, and speech are at commercially acceptable performance levels.
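As a rough illustration of why that cluster is considered commercially ready, here is a minimal sketch, my own example rather than anything from the article, of applying an off-the-shelf pretrained CNN to image classification with Keras. The model choice (MobileNetV2) and the file name photo.jpg are assumptions for illustration, and the code presumes TensorFlow is installed.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Load a CNN pretrained on ImageNet; no training effort is required on our side.
model = MobileNetV2(weights="imagenet")

# Prepare a single (hypothetical) local image in the format the network expects.
img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Classify and print the top three ImageNet labels with their scores.
preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")
```

The point is less the specific model than the fact that pretrained CNNs can be dropped into a product with minimal effort, which the other techniques on the list cannot yet claim.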

What you may need to explain to your executive sponsors is that these six “true” AI methods are still the bleeding edge of our capabilities. Projects based on these are high-cost, high-effort, and higher-risk.

The conclusion ought to be that there are many business solutions that can be based on machine learning without involving true AI methods. As more third-party vendors create industry- or process-specific solutions using these new techniques, that risk will diminish, but that’s not today.

For the rest of us, the conflict of definitions remains. When someone asks you about AI, you’re still going to need to ask, “What do you mean by that?”


This article originally appeared in Data Science Central and is republished by permission.



About Bill Vorhies

Bill Vorhies is editorial director for Data Science Central and has practiced as a data scientist since 2001. Bill is also president & chief data scientist at Data-Magnum. He can be reached at Bill@DataScienceCentral.com or Bill@Data-Magnum.com.