Go, AlphaGo And The Reach Of Machine Learning

Pierre Leroux

Nineteen years after Deep Blue defeated chess champion Garry Kasparov 3½–2½ (2 wins, 1 loss, and 3 draws), history has repeated itself. This time it involves the game of Go, with AlphaGo, built by a team of researchers from Alphabet, taking on the South Korean master Lee Sedol. The AlphaGo team claimed victory in 4 out of 5 games and accomplished the unthinkable one more time: the machine is better than the human at playing a game with billions of possibilities. (For more on the match, read this excellent article from Wired magazine.)

Besides the obvious excitement of pitting man vs. machine, why should you care about the game of Go and AlphaGo? The AlphaGo algorithm is powered by a combination of techniques, including machine learning, that allow it to learn from millions of moves and master how to play the game.

But machine learning is not restricted to the realm of old strategy games. It has quietly made its way into many enterprises and has proven very valuable at beating humans in use cases such as identifying the customers most likely to respond to a given offer, spotting potential trouble with production assets, and flagging fraudulent transactions.

Machine learning, once the exclusive domain of researchers and scientists, is now becoming more and more accessible to enterprises. How? By borrowing concepts from the modern factory, such as automation, to dramatically improve the productivity of people and accelerate the delivery of output, in this case predictive insight.

Similar to the modern factory with its myriad machine tools, new predictive analytics solutions also provide “tools” to handle many types of data mining functions: classification, regression, clustering, time series, association rules, and attribute importance. They share a streamlined user experience and can be used by most people armed with some analytics expertise and a good sense of the business domain they are examining.

What’s so unique about an algorithmic automation framework?

The idea behind this framework is to provide the best compromise between quality and robustness. Quality is about minimizing the error the system makes on the training data, that is, past data selected to train the model. At SAP, we call this indicator KI. Of course, the smaller the empirical error, the better, but that’s not the only error that matters.

You also need to consider the extra error that comes from applying the trained model to new data: that’s robustness. There is no consensus on the notion of robustness and how to express it, but that doesn’t mean you should select models solely based on how well they perform against training data sets.

At SAP, we have developed a proprietary robustness metric called KR. The KR metric looks at the variability of the quality metrics on parts of the training data set. It should be viewed as a good proxy to “see” if your model will perform well with new data sets and attain a better compromise between quality and robustness.
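KI and KR are proprietary SAP metrics, so their exact formulas aren’t public. As a rough illustration of the general idea only, the sketch below (with a deliberately trivial mean-predictor model, all names hypothetical) computes quality as the error on the full training set and a robustness proxy as the spread of that error across slices of the training data:

```python
# Illustrative sketch only: SAP's KI and KR are proprietary. Here,
# "quality" is the training error and "robustness" is approximated by
# the variability of that error across slices of the training data.
import random

def train_mean_model(rows):
    # Trivial stand-in model: always predict the mean target value.
    mean = sum(y for _, y in rows) / len(rows)
    return lambda x: mean

def mean_abs_error(model, rows):
    # Mean absolute error of the model on a set of (x, y) rows.
    return sum(abs(model(x) - y) for x, y in rows) / len(rows)

def quality_and_robustness(rows, n_slices=5, seed=0):
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    model = train_mean_model(rows)
    quality = mean_abs_error(model, rows)  # KI-style quality indicator
    # Robustness proxy: how much does the error vary across data slices?
    size = len(rows) // n_slices
    slice_errors = [
        mean_abs_error(model, rows[i * size:(i + 1) * size])
        for i in range(n_slices)
    ]
    mean_e = sum(slice_errors) / n_slices
    spread = (sum((e - mean_e) ** 2 for e in slice_errors) / n_slices) ** 0.5
    return quality, spread  # a low spread hints the model generalizes
```

A model whose error stays stable across slices is a safer bet on new data than one whose error swings wildly, which is the compromise between quality and robustness described above.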

For more on machine learning, quality and robustness, and predictive analytics, download the free white paper, Machine Learning Automation: Beyond Algorithms.

 



About Pierre Leroux

Pierre Leroux is the Director of Predictive Analytics Product Marketing at SAP. His areas of specialty include Data Discovery, Business Intelligence, Cloud applications, Customer Relationship Management (CRM), and ERP.