Bringing Transparency Into AI

Dr. Markus Noga

Companies are increasingly using machine learning models to make decisions that directly or indirectly affect people’s lives, such as the allocation of jobs, loans, or university admissions. Algorithms are also used to recommend a movie to watch, a person to date, or an apartment to rent.

When talking to business customers – the operators of machine learning (ML) – I hear growing demand to understand how these models and algorithms work, especially as the number of ML use cases without a human in the loop keeps expanding. Imagine an ML model recommending the top 10 candidates from 100 applicants for a job post. Before trusting the model’s recommendation, the recruiter wants to check the results. If the recruiter were checking or retracing a human’s work, they would look for short summaries or clues like underlines, circles, pluses, or minuses around salient elements. ML models offer no such clues: depending on the depth of the neural network, there may be little explanation or transparency to fall back on. While a human can understand a machine with three or even 30 gears, levers, and pulleys, most would struggle to explain something with 300 moving parts.

This black box problem of artificial intelligence is not new, but its relevance has grown with modern, more powerful machine learning solutions and more sophisticated models. Meanwhile, models can outperform humans in complex tasks like classifying images, transcribing speech, or translating from one language to another. And the more sophisticated the model, the harder it is to explain.

In some ML-enabled applications, the black box issue doesn’t matter because users have no choice if they want to leverage the machine’s intelligence. If simpler, more explainable models cannot do a given job (for example, translating a text from Chinese to English at a human-equivalent level), the only decision the user can make in terms of explainability is not to use the model at all and translate the text themselves.

In other applications, we don’t care about the transparency of an algorithm. A model that selects the 10,000 most promising customer prospects from a list of millions, or picks the best product to recommend to each of them, is an example where humans are out of the loop, because checking every prediction would be prohibitively effort-intensive.

Providing explainability for sophisticated machine learning models is an area of active research. Roughly speaking, there are five main approaches (illustrated with short, simplified code sketches after the list):

  • Use simpler models. This sacrifices accuracy for explainability.
  • Combine simpler and more sophisticated models. The more sophisticated model provides the recommendation; the simpler model provides the rationale. This often works well, but there are gaps when the two models disagree.
  • Use intermediate model states. For example, in computer vision, states in intermediate layers of the model are excited by certain patterns. These can be visualized as features (like heads, arms, and legs) to provide a rationale for image classification.
  • Use attention mechanisms. Some of the most sophisticated models have a mechanism to direct “attention” towards the parts of the input that matter the most (i.e., setting higher weights). These can be visualized to highlight the parts of an image or a text that contribute the most to a particular recommendation.
  • Modify inputs. If striking out a few words or blacking out a few parts of an image significantly changes the overall model result, chances are these inputs play a significant role in the classification. They can be explored by running the model on variants of the input, with the results highlighted to the user.
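
For the first two approaches, here is a rough sketch in Python, assuming scikit-learn, a synthetic dataset, and illustrative model choices: a gradient-boosted classifier makes the recommendation, while a shallow decision tree (the simpler model) is trained to mimic it so that its rules can be shown as an approximate rationale.

    # Sketch of "combine simpler and more sophisticated models" (and, implicitly,
    # "use simpler models"). Dataset, models, and tree depth are illustrative choices.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    # Sophisticated model: provides the actual recommendation.
    complex_model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Simpler surrogate: trained to reproduce the complex model's predictions.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, complex_model.predict(X))

    # Fidelity: how often the rationale model agrees with the real model.
    # The "gaps" mentioned above are exactly the cases where the two disagree.
    fidelity = (surrogate.predict(X) == complex_model.predict(X)).mean()
    print(f"Surrogate agrees with the complex model on {fidelity:.1%} of cases")

    # The surrogate's decision rules serve as the human-readable explanation.
    print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(10)]))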
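
For intermediate model states, a minimal sketch assuming PyTorch, a toy convolutional network, and a random placeholder image: a forward hook captures the activations of an intermediate layer so they could be rendered as a coarse heat map over the input.

    # Sketch of "use intermediate model states": capture an intermediate layer's
    # activations with a forward hook. Network and input are toy placeholders.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )

    activations = {}

    def save_activation(module, inputs, output):
        activations["conv2"] = output.detach()

    # Hook the second convolution; its channels respond to mid-level patterns.
    model[2].register_forward_hook(save_activation)

    image = torch.rand(1, 3, 64, 64)   # stand-in for a real input image
    logits = model(image)

    # Averaging over channels gives a spatial map of where the layer "fires",
    # which can be overlaid on the image as an explanation.
    heatmap = activations["conv2"].mean(dim=1).squeeze(0)
    print(heatmap.shape)               # torch.Size([64, 64])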
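
For attention mechanisms, a toy illustration, again assuming PyTorch: scaled dot-product attention over word embeddings, where the attention weights themselves can be surfaced to the user as a highlight over the input. The embeddings and the query vector are random placeholders that a real model would learn.

    # Sketch of "use attention mechanisms": the softmax weights show which
    # input tokens a (hypothetical) model attends to most.
    import torch
    import torch.nn.functional as F

    words = ["the", "applicant", "led", "a", "large", "engineering", "team"]
    embeddings = torch.rand(len(words), 16)        # (tokens, dim), assumed learned
    query = torch.rand(1, 16)                      # e.g. a learned "leadership" probe

    scores = query @ embeddings.T / 16 ** 0.5      # scaled dot-product scores
    weights = F.softmax(scores, dim=-1).squeeze(0) # attention weights, sum to 1

    # The highest-weighted words would be highlighted for the recruiter.
    for word, w in sorted(zip(words, weights.tolist()), key=lambda t: -t[1]):
        print(f"{word:12s} {w:.2f}")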
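
For the last approach, modifying inputs, a simplified sketch for a text classifier, assuming scikit-learn and a toy sentiment dataset: each word is removed in turn, and the words whose removal moves the predicted probability the most are the ones to highlight.

    # Sketch of "modify inputs": perturb the input and measure the change in score.
    # Training data, labels, and the pipeline are toy assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = ["great product, works well", "terrible service, very slow",
                   "excellent support and fast delivery", "awful quality, broke quickly"]
    train_labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    def word_importance(text):
        """Score each word by how much dropping it changes the positive-class probability."""
        base = model.predict_proba([text])[0, 1]
        words = text.split()
        scores = []
        for i in range(len(words)):
            variant = " ".join(words[:i] + words[i + 1:])
            scores.append((words[i], base - model.predict_proba([variant])[0, 1]))
        return sorted(scores, key=lambda s: abs(s[1]), reverse=True)

    # Words with the largest scores are the ones to highlight to the user.
    print(word_importance("fast delivery but awful support"))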

Ultimately, human decision making can only be explained to some degree. The same is true for sophisticated algorithms. However, it is software providers’ responsibility to accelerate research on technical transparency and so build further trust in intelligent software.

For more on the ethics of AI, read about SAP’s guiding principles for artificial intelligence (AI).


About Dr. Markus Noga

Dr. Markus Noga is vice president of Machine Learning at SAP. Machine Learning (ML) applies deep learning and advanced data science to solve business challenges. The ML team aspires to build SAP’s next growth business in intelligent solutions and works closely with existing product units and platform teams to deliver business value to their customers. Part of the SAP Innovation Center Network (ICN), the Machine Learning team operates as a lean startup within SAP, with sites in Germany, Israel, Singapore, and the United States.