Artificial Intelligence Without Data Intelligence Is Artificial

Kevin Poskitt

Have you ever watched a robot vacuum cleaner at work? It starts off amusing and becomes progressively more irritating as you watch it miss the one patch of dirt you want it to clean. The promise of artificial intelligence (AI) is much the same. It can automate routine tasks and deliver significant tangible value, but if you aren't careful, you might spend most of your time repeatedly bumping into the same wall or getting tangled up in a mess of cables for the 20th time. Unfortunately, there is some evidence that companies are spending more time tangled up than deriving value from AI:

  • 84% of customers are concerned about the quality of data being used to feed algorithms.
  • 86% of enterprises claim they are not getting the most out of their data.
  • 74% say their data landscape is so complex that it limits agility.

As with robot vacuums, the key to good results is to do a little tidying first. Artificial intelligence relies on complex mathematics and advanced computational power to deliver results, but what powers all the fancy math and expensive hardware is data. Data is the lifeblood of artificial intelligence, and without a good grasp of data management, AI will fail to yield positive results.

Companies have moved from the traditional on-premises paradigm, storing data in governed databases underneath business applications like ERP, to one where applications run both in the cloud and on-premises. Data is now coming from less structured sources (e.g., social media, blogs, sensors). The result is an increasingly complex data landscape, and that complexity brought with it a slew of new tools to manage all the new data types, formats, and locations.

Managing a flood of new data to power AI

As companies tried to keep up with this flood of new data, the idea of the data lake as a single store of all data for later use became popular, giving rise to even more tools and techniques. Soon there was a fracture between the highly governed data of enterprise IT systems and the comprehensive but often ungoverned world of large-scale data lakes and streams of data from blogs, syslogs, sensors, IoT devices, and more. But AI needs to connect to all of this data, as well as image, video, audio, and text data sources. Simply trying to manage all of these connections has required a fragmented collection of tools. Until now.

Comprehensive new cloud solutions scale artificial intelligence across the enterprise by managing three critical things:

  • The data you need, regardless of where it is or what kind of data it is
  • The design of machine learning algorithms with the tools and frameworks your data science teams want to use
  • The deployment of machine learning with cloud containers so IT can rapidly deploy, manage, and automate the end-to-end lifecycle of AI at scale

Artificial intelligence is a team effort that requires coordination and cooperation between:

  • The business users who understand the needs of the organization and its customers
  • The data engineers who understand where the data is located and how it is structured
  • The data science teams who understand how to extract value from that data
  • The IT and DevOps teams who support them

All the members of your AI team should be able to work together for maximum productivity and speed, supported by software that offers built-in tools for governance, metadata management, and machine learning transparency. This approach ensures that the results of their efforts can be explained, understood, and trusted.

Creating the AI assembly line

Just as the second industrial revolution was driven by the assembly line for physical manufacturing, the fourth industrial revolution will be driven by the AI assembly line: the ability to break the creation of AI down into specialized parts, brought together by a business process and automated at scale. This way, organizations can extract maximum value from their data assets and deliver the best experiences to their consumers and clients.

Learn about SAP Data Intelligence, announced at SAPPHIRE NOW, which enables the creation of an AI assembly line for organizations in a trusted and repeatable fashion.

And please join our “Pathways to the Intelligent Enterprise” webinar on Tuesday, June 11, featuring Phil Carter, chief analyst at IDC, and SAP’s Dan Kearnan and Ginger Gatling. Register here.

This article originally appeared on the SAP HANA blog and is republished by permission.



About Kevin Poskitt

Kevin Poskitt is part of SAP's product management team focused on machine learning, data science, and artificial intelligence. He is responsible for leading SAP's next-generation projects in unified machine learning. His experience encompasses more than 10 years at various technology companies, ranging from small startups to large software vendors, where he has worked in multiple departments including sales, marketing, finance, and product management. He is a graduate of the University of Toronto with a specialty in economics and finance. He holds a bachelor of commerce and a diploma in accounting from the University of British Columbia.