Part 1 in a 2-part series on model risk management in the areas of artificial intelligence and decision intelligence for the enterprise. Read Part 2.
In recent years, artificial intelligence (AI) has moved from being fodder for post-apocalyptic fiction to occasionally dystopian reality. The technology has also enabled several valuable advances in consumer services.
To date, AI’s risks and benefits have remained largely confined to consumer-facing applications. And while security lapses and other unintended, unforeseen risks have affected many individuals and remain challenging to remediate, the causes, like cyberattacks, fake news, and privacy invasion, are increasingly well understood.
Over the last several years, however, AI and related technologies like decision intelligence (DI) have moved beyond Silicon Valley and Seattle to penetrate a new type of company: the large business enterprise. Companies like MasterCard, Grainger, Ford Motor Company, and others are looking to AI/DI for a competitive advantage. Reflecting this maturation in the market, SAP, for example, has made its launch of SAP Leonardo—specifically targeted towards machine learning in the enterprise—a strategic priority. Built on large data stores and using the latest in fast, high-quality AI methods, these enterprise initiatives by SAP and others are accelerating even more rapidly than the first wave of consumer-focused AI developments.
Data/AI/DI as untapped factors of production
Enterprises now perceive that, with the rise of AI-powered tools, data stands alongside natural resources, labor, and capital as a fourth factor of production, one that remains largely untapped (see below). The opportunities this will unleash are extraordinary: AI and data will power business growth worldwide for decades to come.
This is both incredibly exciting and a reason for great caution. Algorithms built on data representing the institutional knowledge of every enterprise on earth have the power to do great harm as well as good. A big part of the reason: that data also encodes institutional biases and business goals.
Applications of AI/DI in the enterprise have far-reaching potential consequences: They can influence public policy, especially around digital governance, and disrupt jobs, capital allocation, or employee incentives.
As enterprise companies embrace AI, the potential for harm is greater than it is for consumer applications. Impacts that are initially contained within a particular enterprise can eventually emerge as macro societal consequences.
Many enterprises are as large as all but the very largest nation-states in terms of control of resources, labor, and capital. The decisions they make already directly impact essentially all facets of the lives of billions of human beings. With AI, that influence will be even greater. We can “supercharge” good decisions or bad ones.
Consider, for example, robotic process automation (RPA), an emerging form of business process automation where workers are replaced by AI systems for parts of a process flow. The RPA promise is that automating repetitive and well-defined processes can improve productivity, in part by removing the variable work speeds and errors intrinsic to human labor. This can produce great financial benefits for an organization. The process mapping and data cleansing that accompanies prep for RPA deployment can also pay immediate and long-term dividends.
But a well-considered RPA initiative must also weigh unintended negative consequences. Computers cannot do more than they are told; in this case, they cannot recognize an “unknown unknown” negative event occurring within a process flow. By removing humans from these processes, we also remove an important checkpoint against situations that were not anticipated by process designers or captured in the backward-facing training data used to build the AI. The additional risk these changes introduce into automated processes must be understood and mitigated.
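The “unknown unknown” risk can be made concrete with a toy sketch. Everything here is invented for illustration: a bot that only knows the invoice types its designers anticipated quietly mis-routes anything else, while a human in the loop would escalate the surprise.

```python
# Invented illustration: an RPA bot that only handles the cases its
# designers anticipated, silently mis-routing anything else.
KNOWN_INVOICE_TYPES = {"purchase", "refund"}

def rpa_route(invoice_type: str) -> str:
    if invoice_type in KNOWN_INVOICE_TYPES:
        return f"processed:{invoice_type}"
    # An "unknown unknown": the bot has no way to recognize that this
    # case is exceptional, so it falls through to a default path.
    return "processed:purchase"  # wrong, but no alarm is raised

def human_route(invoice_type: str) -> str:
    if invoice_type in KNOWN_INVOICE_TYPES:
        return f"processed:{invoice_type}"
    return "escalated"  # a person notices something unexpected

print(rpa_route("chargeback"))    # silently mishandled
print(human_route("chargeback"))  # flagged for review
```

The design point is not that humans never err, but that they provide a detection channel for cases the process designers never modeled.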
RPA also introduces societal and social risks. What happens to people whose jobs are automated away by RPA?
This question is not exclusive to RPA. Many AI systems have this same risk structure—a positive benefit that introduces a new risk internal to the organization, as well as a possible negative externality.
The enterprise AI/DI risk model
Beginning with data risks, and moving all the way through the potential of “superhuman” robots, the risks involved in deploying AI and DI into the enterprise can be classified as shown below.
Current view of risk management: data risks
Contrast the model illustrated above with the focus of IT risk management today. By and large, current models and practices focus on risks involving loss of data, whether via corruption, theft, or loss of access.
These risks are increasingly well-understood. While the detection, prioritization, and mitigation of these risks are time- and resource-consuming, this is not, in essence, an opaque problem: We can understand the process and agree on our objectives.
Traditional risk management is defined by industry standards like ITIL and Sarbanes-Oxley. Most risk management methodologies include some version of the following elements:
- Risk management: To define a framework for quantifying risks (and deciding which will be accepted), along with the roles and responsibilities involving risk within the organization.
- Business impact and risk analysis: To quantify the impact and likelihood of a risk and to maintain a prioritized list of risks by this net impact.
- Risk mitigation: To determine where risks should be mitigated and to identify risk owners to be held responsible for management of mitigation measures.
- Risk monitoring: To track and correct countermeasures.
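The elements above can be sketched as a minimal risk register. The structure, field names, and scoring rule (net impact = impact × likelihood) below are illustrative assumptions, not part of ITIL or any other standard.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    impact: float      # estimated business impact, e.g. in dollars
    likelihood: float  # probability of occurrence, 0.0-1.0
    owner: str = "unassigned"  # risk owner responsible for mitigation
    accepted: bool = False     # risks the framework chooses to accept

    @property
    def net_impact(self) -> float:
        # Business impact analysis: quantify impact x likelihood
        return self.impact * self.likelihood

def prioritize(risks):
    # Maintain a prioritized list of unaccepted risks by net impact
    return sorted((r for r in risks if not r.accepted),
                  key=lambda r: r.net_impact, reverse=True)

risks = [
    Risk("data breach", impact=5_000_000, likelihood=0.02, owner="CISO"),
    Risk("backup failure", impact=200_000, likelihood=0.10, owner="IT ops"),
    Risk("minor outage", impact=10_000, likelihood=0.50, accepted=True),
]

for r in prioritize(risks):
    print(f"{r.name}: net impact {r.net_impact:,.0f} (owner: {r.owner})")
```

Even this toy version shows the inward focus of the tradition: every field describes the enterprise itself, and nothing captures an externality.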
As applied to IT, these measures always cover at least security administration, application change control, data backup and recovery, and the systems development lifecycle. More specific controls apply to situations like development outsourcing, use of open source software, Software-as-a-Service (SaaS), and more.
What is missing from these approaches is attention to the embedded patterns in the data being protected. These patterns emerge when that data is used to build models, and then those models are used to build systems. Model risk management is in its infancy and is a focus today only for financial institutions. We have yet to develop a general model of the new kinds of risks that emerge when data is used to construct AI models.
What are we missing?
Many enterprise data sets are decades old, and their original creators have long since moved on or passed away. The contents of these repositories were shaped by foundational decisions:
- What data should be collected?
- How should this data be stored, arranged, and networked?
- What business outcomes should be targeted?
Those targeted outcomes almost certainly did not include using the data set to build an AI model. For this and many other reasons, no collection of data is a perfect model of the world. Every data set includes biases and misalignments that can be insidious and reach far beyond the immediate business outcomes we are seeking.
For example, what if a historical CRM system was used by salespeople who erroneously believed that the best customers were located only in Canada, and they never called on anyone in Mexico? A machine-learning system based on this data would therefore have a Mexican “blind spot.”
Each time a human interacts with the system, whether that’s a CRM agent entering client data or a marketing researcher deciding whom to call, new selection biases like this one are added. And this problem is pervasive. DNA databases are, by and large, “too white.” And medical trials are conducted much more often on men than on women.
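The CRM blind spot above can be reproduced in a few lines. The records and the scoring rule (historical win rate per country) are invented for illustration; real systems are far more complex, but they inherit the same gap.

```python
from collections import Counter

# Historical CRM records: salespeople only ever called on Canada and
# the USA, so the training data contains no Mexican customers at all.
training_deals = [
    {"country": "Canada", "won": True},
    {"country": "Canada", "won": True},
    {"country": "Canada", "won": False},
    {"country": "USA", "won": True},
]

# A naive "model": score a prospect by the historical win rate
# observed for its country in the training data.
wins = Counter(d["country"] for d in training_deals if d["won"])
totals = Counter(d["country"] for d in training_deals)

def score(country: str) -> float:
    if totals[country] == 0:
        return 0.0  # never seen: the model's blind spot
    return wins[country] / totals[country]

print(score("Canada"))  # learned from the data
print(score("Mexico"))  # 0.0, not because Mexican customers are bad,
                        # but because the biased data never included them
```

Nothing in the model’s output distinguishes “this market is unpromising” from “this market was never sampled,” which is exactly how a selection bias becomes an invisible business rule.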
Inward versus outward focus in risk management: implications for AI
Even if data were miraculously free of hidden biases, traditional IT risk management focuses inward, on enterprise assets, processes, and economic outcomes. Few risk managers currently include the broader societal impacts of business decisions in their risk assessments.
With AI, it will be critical to extend the risk framework to explicitly address these externalities. In terms of the ITIL risk process map, for instance, both external (societal, ecological, etc.) risk identification and management and structural data bias identification and management are missing.
How do design risks impact outcomes?
Consider one example of the impact of such a structural bias. In Weapons of Math Destruction, Cathy O’Neil describes a school system that used an outside vendor to rank teacher performance. A key variable in the system was the change in a classroom’s performance scores from one year to the next.
Based on her low ranking in this system, an outstanding fifth-grade teacher named Sarah Wysocki was fired. It was only later that she learned the algorithm used to score her performance did not check for potentially artificially inflated scores from previous instructors. In other words, a factor outside the algorithm created an unintended consequence. This pattern of unmodeled outside factors pervades many stories of enterprise risk.
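The mechanism is plain arithmetic. The numbers below are invented: if a prior teacher inflated year-one scores, a year-over-year delta penalizes the next teacher even when students genuinely improved.

```python
# Hypothetical scores on a 0-100 scale; all numbers invented for illustration.
true_score_year1 = 60        # students' actual performance in year 1
inflated_score_year1 = 85    # what the previous instructor reported
score_year2 = 70             # genuine improvement under the new teacher

# The ranking system only sees the year-over-year difference.
apparent_delta = score_year2 - inflated_score_year1  # looks like a decline
actual_delta = score_year2 - true_score_year1        # actually an improvement

print(apparent_delta)  # -15
print(actual_delta)    # 10
```

The model is internally consistent; the failure lives in a variable it never observes.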
There is no obvious single point of failure that leads to the negative outcome, because the outcome emerges from a chain of many decision points, none of which was engineered according to a well-thought-out framework. We cannot address the failure without understanding how those links shape the decision space.
To be continued in Part 2.
For more on this topic, see The Human Side Of Machine Learning.