Part 2 in a 2-part series exploring how sustainability management can promote trust in the use of artificial intelligence. Read Part 1.
Part 1 of this series examined the projected growth in the use of artificial intelligence (AI) – and the potential risks. In Part 2, we will investigate approaches to responsible deployment.
The overarching guideline: universal human rights
Universal human rights should be the central guideline for action. No other value system offers comparable opportunities to win the trust of employees, investors, customers, and civil society in dealing with AI. Only those who credibly demonstrate that their AI path respects the basic value of human dignity can effectively counteract the often still rather diffuse concerns about the use of self-learning IT systems. Safeguarding human rights is therefore an integral part of the non-negotiable core of AI governance – not least for companies that operate worldwide, amid political, legal, and cultural frameworks that can vary greatly from one country to another.
But how do developers and users find out whether their current AI practice actually conforms to human rights? Companies can provide practically oriented ethical guidelines. These guidelines should outline the AI challenges relevant to value creation and the principles to be observed in addressing them, in light of the company's business strategy, business model, and core competencies. In this regard, the financial sector is still in its infancy. Ethics guidelines are currently more prevalent in the technology sector, where international players such as Google and Microsoft have published them alongside Deutsche Telekom and SAP.
A framework for conscientious AI governance
Whether a technology like AI is consistent with a company’s ethical principles depends first and foremost on where and for what purpose it is deployed. It is not enough to simply make a technology available and then leave its application to the free play of market forces. It is incumbent on AI suppliers to understand, before each new delivery, what can specifically happen with their systems. Against this background, it is extremely important to have sales and consulting on board in addition to product development. In many cases, these colleagues have the most direct contact with the customer and are the first in the company able to assess a system’s intended purposes. Watertight contractual arrangements must also be in place for cases in which uses contrary to the original intent become known, especially while external regulatory means remain inadequate.
With AI, both the technological capabilities and the application scenarios are subject to extraordinarily rapid change. To keep pace, companies should set up permanent steering and advisory bodies that meet regularly and, when the need arises, on an ad hoc basis. At SAP, we have formed the AI Ethics Steering Committee, which brings together leaders from the very areas most affected by AI-driven change. These include corporate strategy, product development, data security and protection, digital transformation, human resources, legal, and sustainability management.
The importance of an outside view
If you want to establish your ethics guidelines and respond promptly to new challenges, you also need an outside perspective. Here, too, it is a good idea to set up a regular exchange format. For this reason, we have decided to augment the work of our internal steering committee with the external AI Ethics Advisory Panel. The panel currently consists of five independent AI experts from science, government, and business.
By no means, however, are AI applications fraught only with risks. On the contrary, the majority of possible deployment scenarios are desirable in every respect. Governing AI in line with one’s values therefore involves much more than pure risk management. Equally important is targeting the opportunities and development potential that AI technologies bring. This also applies to the implementation of the UN’s 17 Sustainable Development Goals (SDGs).
New opportunities through AI
The fact that economic interests and the responsibilities of civil society can go hand in hand is demonstrated by approaches in the financial sector. For example, the U.S. real estate financier Freddie Mac, together with a research team from George Washington University, demonstrated that the use of AI can inject fairness into the process of granting customer loans. Thanks to AI, the number of loan approvals increased in applicant groups that had previously received significantly fewer approvals under the traditional, non-automated award procedure. The indispensable prerequisite for such success is to free the algorithms of statistical biases. In this way, solutions emerge that promote the social acceptance of artificial intelligence in line with companies’ business objectives.
This article originally appeared in Börsen-Zeitung and is republished with permission.