From Farm To Table: Bringing AI Into An Enterprise

Ajay Dholakia

Data is now commonly equated with oil, a natural resource that fuels many aspects of our lives. This notion of fuel got me thinking about another analogy for data: it is the food we all need to sustain ourselves and grow.

Food is a plentiful natural resource that, unlike oil, can be grown in a systematic way. The farm-to-table experience involves growing and harvesting raw food, cleaning and preparing, creating exciting recipes, and ultimately serving a delicious meal. This process has much in common with what we need to do with data when it comes to making the most of it in an enterprise.

What is the farm-to-table process for data? How do we harvest data available everywhere and extract actionable insights from it? Let’s get into the details.

As many of us are starting to understand, data has roots and gravity. Data tends to stay where it is created. It tends to attract additional growth of similar data in its vicinity. In some cases, it is required to stay within particular boundaries. Enterprises are therefore looking to extract value from all the data they have, and continue to collect, on premises. That drives a real need for matching compute capabilities that can extract that value. In the food-preparation analogy, this aligns with the use of locally grown and procured ingredients.

Enabling data harvesting and preparation

So how do enterprises bring AI technologies into their business and IT architecture in a way that better enables the harvesting and preparation of data?

There are measured steps that an enterprise needs to take. The training and evaluation of machine learning (ML) and deep learning (DL) models are akin to recipe preparation, while the deployment of trained models in chosen applications parallels plating and serving.

Let’s look at the various stages of this workflow. It is always useful to understand the technology building blocks needed at each stage. It begins with the collection of raw data. This involves ingestion, transformation, and storage capabilities. Handling the volume, variety, and velocity of data with a suitable infrastructure is crucial. Structured data goes into databases and data warehouses. Unstructured data can go into Hadoop clusters. More specialized semi-structured data may involve a combination of these repositories. Collectively, these are being incorporated into an enterprise data lake architecture.
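
To make this stage a little more concrete, here is a minimal sketch in Python, assuming pandas and pyarrow are available; the file paths, column names, and the tiny "data lake" directory layout are all hypothetical, and a real enterprise pipeline would typically run on Spark, a data warehouse, or similar infrastructure.

    import pandas as pd

    # Ingest: structured records (e.g., orders) exported as CSV
    orders = pd.read_csv("raw/orders.csv", parse_dates=["order_date"])

    # Ingest: semi-structured clickstream events stored as JSON lines
    events = pd.read_json("raw/events.jsonl", lines=True)

    # Transform: light normalization before landing the data in the lake
    orders["order_total"] = orders["quantity"] * orders["unit_price"]

    # Store: partitioned Parquet files laid out as a small data lake
    orders.to_parquet("lake/orders/", partition_cols=["region"])
    events.to_parquet("lake/events/", partition_cols=["event_type"])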

Having all the data available leads to extracting and preparing specific data sets that can then be used for training the ML and DL models. This is where a data scientist comes into the picture, bringing appropriate tools to clean, massage, and curate the data and come up with manicured data sets. This is akin to washing, cleaning, and cutting raw food materials and grouping and storing ingredients that go into different recipes.
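
Continuing the same hypothetical example, the washing, cutting, and grouping of ingredients might look roughly like this in pandas; the thresholds, column names, and the "will_reorder" label are illustrative assumptions only.

    import pandas as pd

    orders = pd.read_parquet("lake/orders/")

    # "Washing and cutting": drop duplicates and rows missing key fields
    orders = orders.drop_duplicates(subset=["order_id"])
    orders = orders.dropna(subset=["customer_id", "order_total"])

    # "Grouping ingredients": derive features and keep only the columns
    # the chosen recipe (model) will actually use
    orders["is_large_order"] = orders["order_total"] > 500
    curated = orders[["order_total", "is_large_order", "will_reorder"]]

    # Store the curated data set for the training stage
    curated.to_parquet("curated/orders_training.parquet")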

Taste-testing to select the right models

Model training and evaluation is the other major responsibility of data scientists. This requires an ability to create taste tests, or compute jobs, with variations in model types, parameters, data sets, and performance metrics. A lot of experimental work is needed to create recipes that please the palate. Flexible and scalable hardware infrastructure, easy-to-use management and orchestration tools, powerful frameworks, and programming environments enable the data science team to select the right models.
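
As one illustration, a small "taste test" over a couple of candidate models could be set up with scikit-learn along these lines; the candidate models, the metric, and the hypothetical "will_reorder" label carried over from the earlier sketches are assumptions, and a real evaluation would sweep far more model types and hyperparameters.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    data = pd.read_parquet("curated/orders_training.parquet")
    X = data[["order_total", "is_large_order"]]
    y = data["will_reorder"]

    # Each candidate "recipe" is scored the same way so results are comparable
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200),
    }
    scores = {
        name: cross_val_score(model, X, y, cv=5, scoring="f1").mean()
        for name, model in candidates.items()
    }

    best_name = max(scores, key=scores.get)
    print(f"Selected model: {best_name} (F1 = {scores[best_name]:.3f})")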

Finally, once the models have been selected, the inference stage can begin. This is where the model is incorporated into the application software that is looking to adopt next-gen analytics to deliver better, actionable insights and drive a more productive decision-making process.
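
A minimal sketch of that handoff, still using the same hypothetical model and file names: the data science team saves the selected model once (here with joblib), and the application loads it to score new records. In production this would more likely sit behind a dedicated model-serving endpoint.

    import joblib
    import pandas as pd

    # Data science side: persist the selected model once
    # joblib.dump(best_model, "models/reorder_model.joblib")

    # Application side: load the model at startup...
    model = joblib.load("models/reorder_model.joblib")

    def score_order(order: dict) -> float:
        """Return the model's estimated probability that this order leads to a reorder."""
        features = pd.DataFrame([order])
        return float(model.predict_proba(features)[0, 1])

    # ...and call it wherever the application needs an actionable insight
    print(score_order({"order_total": 120.0, "is_large_order": False}))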

The adoption of AI in an enterprise needs to follow the lessons from past waves of technology adoption, such as cloud computing (6-8 years ago) and Big Data (3-4 years ago). The key lesson from these experiences has been to avoid science experiment–style, one-off projects. More than half, and as many as three-fourths, of such projects failed. The reason for this failure was not just the technology. Without also driving significant process and culture changes within the enterprise, even the best new technologies end up as mere curiosities that never find broad adoption.

In short: Pick a business function as a target for AI adoption and select the application software that implements and enables it. Selecting a business function is like picking a cuisine to focus on. Selecting an application is like selecting an entrée.

Continuously refining the recipes

Launch a pilot program. Bring in and prepare data. Explore models suitable for the selected ML and DL algorithms through training and benchmarking. This is the selection of recipes and menus. Select the best-trained models and offer them up for incorporation in the selected application. This is where the consumption of the results from all the previous stages can begin. People are able to relish delicious meals while they reflect on all the hard work that went into bringing everything together from the farm to the table. Of course, continuous refinement of the models based on feedback from their ongoing use is necessary to stay on top of an ever-evolving business world.
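
That refinement loop can be as simple as periodically re-running the taste test on fresh feedback data, sketched below under the same hypothetical paths, columns, and model as before; the retraining cadence and the decision to replace the production model are assumptions a real team would make deliberately.

    import joblib
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def retrain(feedback_path="lake/feedback/", model_path="models/reorder_model.joblib"):
        # New labeled feedback accumulated during ongoing use of the application
        feedback = pd.read_parquet(feedback_path)
        X = feedback[["order_total", "is_large_order"]]
        y = feedback["will_reorder"]

        # Re-score a fresh candidate on the new data; in practice you would
        # compare this against the current production model before replacing it
        candidate = RandomForestClassifier(n_estimators=200)
        score = cross_val_score(candidate, X, y, cv=5, scoring="f1").mean()
        print(f"Refreshed candidate F1 = {score:.3f}")

        candidate.fit(X, y)
        joblib.dump(candidate, model_path)

    # Run on whatever cadence the business needs (nightly, weekly, ...)
    retrain()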

Having successfully completed the pilot phase, and possibly rolled it out into production, the enterprise can then repeat the process with the next application. And the next one, and so on.

There are a few more imperatives: Seeking advice and bringing in the necessary skills are crucial. Exploration of ML/DL models can become a project in itself, so it is important to stay focused on serving the selected application domain that needs AI capabilities.

Cooking it up right

Bringing AI into the enterprise may seem like a daunting task, much like cooking can be for those who don’t know their way around a kitchen. Someone who is just learning to prepare meals might choose to take cooking classes from a chef. Similarly, Lenovo has much to offer to enterprises starting the journey of AI adoption. Starting from a consultative evaluation of a target application area, Lenovo Data Center Group can then provide the infrastructure, associated tools, and ecosystem experience to make your AI journey a worthwhile undertaking.

This article originally appeared in Lenovo Xperience Data Center Newsroom and is republished by permission.



About Ajay Dholakia

Dr. Ajay Dholakia is a principal engineer, master inventor, and chief architect for solutions in Big Data, analytics, AI, and healthcare with Lenovo Data Center Group.