Last week I discussed Predictive Models As A Service Versus Training As A Service and highlighted some of the challenges technology executives need to be aware of when making investment decisions. Today, I’ll do the same for the second category of machine learning, which I call “training as a service: micro modeling.”
Training as a service: micro modeling
When we discuss using machine learning to optimize the full customer life cycle for a single company, we’re usually referring to predictive models trained on a specific corpus of data available only to that company. For example, each company has its own churn model, tailored to its customer behaviors and competitive context. These systems are dedicated to one task in one context and thus can be trained on much smaller data sets.
The first challenge when training a model is the co-location of the data and the algorithm implementation (which we call the predictive modeling engine). It doesn’t take long to see that there are only two possibilities:
- Bring the modeling engine to the data with machine-learning libraries. (This is something that database vendors and Big Data platforms have understood well.)
- Bring the data to the modeling engine. (Most predictive and machine learning environments in the cloud are based on this principle. These cloud environments will soon need to support hybrid scenarios so that this challenge becomes a design option rather than a problem.)
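To make the trade-off concrete, here is a minimal, hypothetical sketch of the two options. The “engine,” the customer data, and the feature names are all illustrative; only the data movement differs between the two paths.

```python
# Hypothetical sketch: the two co-location options for data and the
# predictive modeling engine. Names and weights are illustrative.

def score(row):
    # A trivial "modeling engine": a weighted sum of two features.
    return 0.4 * row["recency"] + 0.6 * row["frequency"]

customers = [
    {"id": 1, "recency": 0.9, "frequency": 0.2},
    {"id": 2, "recency": 0.1, "frequency": 0.8},
]

# Option 1: bring the engine to the data -- ship the scoring function
# to where the rows live (e.g. as a UDF inside the database) and keep
# the data in place.
scores_in_place = {row["id"]: score(row) for row in customers}

# Option 2: bring the data to the engine -- extract the rows and send
# them to a remote modeling service for scoring.
def remote_engine(batch):
    # Stand-in for a cloud scoring endpoint.
    return {row["id"]: score(row) for row in batch}

scores_remote = remote_engine(customers)
```

Either way the scores are identical; what differs is where the computation runs and how much data crosses the network.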
The second challenge is that we can expect a lot of predictive model training to occur, and this is where automation plays a big role. You can’t expect to have a data scientist behind each “local” predictive model that you will have to train, so automation across the full modeling life cycle, from automated data preparation to automated algorithm selection to automated deployment and control, is essential.
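A minimal sketch of that life cycle, assuming a deliberately trivial “model” (a mean-based threshold) so the automation loop is the focus. The segment names, preparation rules, and registry are all hypothetical.

```python
# Hypothetical sketch of automating prepare -> train -> deploy so many
# "local" models can be trained without a data scientist per model.

def prepare(raw):
    # Automated data preparation: drop incomplete rows, normalize.
    rows = [r for r in raw if None not in r]
    hi = max(x for r in rows for x in r)
    return [[x / hi for x in r] for r in rows]

def train(data):
    # Automated algorithm: a trivial mean-based threshold model.
    mean = sum(r[0] for r in data) / len(data)
    return lambda r: r[0] > mean

def deploy(model, registry, name):
    # Automated deployment: register the model under its segment name.
    registry[name] = model
    return registry

registry = {}
segments = {
    "region_a": [[4, 1], [8, 2], [None, 3], [6, 1]],
    "region_b": [[2, 5], [9, 4], [5, 5]],
}

# One automated loop replaces a data scientist per segment.
for name, raw in segments.items():
    deploy(train(prepare(raw)), registry, name)
```

The same loop scales to hundreds of segments; only the control and monitoring steps (not shown) grow with it.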
The third challenge is that building a predictive model is never the end of the story. You need to deploy the models into operations. This can usually be done in two ways:
- The easier way is to expose the scoring equation so it can generate output data (scores, probabilities, estimates, forecasts, segments, or recommendations, for example). This integration is data-centric.
- The more complex integration is to give access to the predictive model itself within the business context, at the point of usage by the business user. (This usually means deep integration into a business application.) This way, users can not only apply a predefined model to generate scores, but also retrain the model on a specific segment they are allowed to manage. This integration is process- and persona-centric.
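The data-centric option can be sketched as follows, assuming a churn model that has been reduced to a logistic scoring equation. The coefficients and feature names are illustrative, not from any real model.

```python
# Hypothetical sketch of data-centric integration: the trained model is
# reduced to a scoring equation whose output data (here, probabilities)
# is all that downstream systems consume.
import math

coefficients = {"intercept": -1.5, "visits": 0.8, "support_tickets": -0.4}

def scoring_equation(record):
    # Logistic scoring equation exported from a trained churn model.
    z = coefficients["intercept"]
    z += coefficients["visits"] * record["visits"]
    z += coefficients["support_tickets"] * record["support_tickets"]
    return 1 / (1 + math.exp(-z))

# Downstream systems see only output data, never the model internals.
prospects = [{"id": 7, "visits": 3, "support_tickets": 1}]
output = [
    {"id": p["id"], "churn_probability": scoring_equation(p)}
    for p in prospects
]
```

This is what makes the integration data-centric: the contract between the model and the business application is a table of scores, nothing more.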
The last challenge is that, in such environments, predictive model consumption is not simply a matter of leveraging a scoring equation that line-of-business consumers treat as a “black box.”
This means that training as a service is not only about training predictive models, but also about providing explanations and insights such as:
- Key influencers: which variables/dimensions impact which business metrics/measures, at a summary level
- Reason codes: when computing the probability that a prospect will become a lead, provide the three main elements that explain, for this particular prospect, why the probability is high or low
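For a linear scoring model, reason codes can be derived directly from per-variable contributions, as in this minimal sketch. The weights, feature names, and prospect values are all hypothetical.

```python
# Hypothetical sketch of reason codes for a linear scoring model: rank
# each variable's contribution to one prospect's score and report the
# top drivers. Weights and features are illustrative.

weights = {
    "email_opens": 0.9,
    "site_visits": 0.5,
    "days_inactive": -0.7,
    "downloads": 0.3,
}

prospect = {
    "email_opens": 2.0,
    "site_visits": 1.0,
    "days_inactive": 3.0,
    "downloads": 0.0,
}

# Contribution of each variable = weight * value for this prospect.
contributions = {k: weights[k] * prospect[k] for k in weights}

# Reason codes: the three variables with the largest absolute impact
# on this particular prospect's score.
reason_codes = sorted(
    contributions, key=lambda k: abs(contributions[k]), reverse=True
)[:3]
```

Aggregating the same contributions across all prospects yields the summary-level key influencers; ranking them per record yields the reason codes.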
The bottom line
Predictive analytics and machine learning are new to most people, but they come in multiple flavors that require different solutions and business practices. The good news is that predictive analytics and machine learning solutions are available for every case.
Turn insight into action, make better decisions, and transform your business. Learn how.