AI In The Financial Sector: The Ethics Of Algorithms (Part 1)

Daniel Schmid

Part 1 in a 2-part series exploring how sustainability management can promote trust in the use of artificial intelligence

The market potential of artificial intelligence (AI) is enormous. Germany's gross domestic product (GDP) is expected to rise by 11.3% by 2030 through AI alone – a gain of around €430 billion. The consulting firm PwC predicts that the greatest development opportunities lie in the healthcare, automotive, and financial sectors, with banking in particular regarded as a top growth candidate. For example, the Boston Consulting Group has calculated that AI could increase the operating income of the world's ten largest financial institutions by up to US$220 billion a year.

A broad spectrum of potential applications

A look at possible fields of application reveals a particularly broad spectrum, especially in the financial sector. This includes customer service and marketing, as well as asset management, portfolio management, treasury, and securities trading – for example, through the automated optimization of equity portfolios. In addition, there are potential efficiency gains in IT security and compliance management. In both areas, AI holds great promise for keeping pace with rapidly growing requirements.

Against this backdrop, however, we must not forget that AI – as with all new technologies – poses considerable risks that need to be carefully managed. This is particularly evident in securities trading, whose AI history dates back to the 1990s. Even then, fund management companies were already implementing automated calculation models that analyzed large amounts of information in real time in order to exploit even the smallest imbalances in the markets. The difficulty of controlling the resulting high-frequency trading became acutely apparent in May 2010, when the Dow Jones fell by more than 1000 index points on the New York Stock Exchange within a few minutes.

Statistical correlations and insidious bias

In contrast to human judgment and the weighing of facts, AI is based on statistical relationships that are detected in large amounts of data and then applied accurately, reliably, and uniformly. This creates risks in areas where the undesirable consequences of purely algorithmic decision-making are far less obvious than in a flash crash on the stock market. A good example is human bias skewing the data – whether conscious or unintended – leading to forms of discrimination that can take hold without any way to reconstruct, or even detect, the underlying motives. When groups of applicants with certain biological, biographical, or socio-cultural characteristics are no longer invited to job interviews, or tradespeople from poorer neighborhoods are rejected for loans, a feeling of powerlessness quickly spreads among those affected.
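How such bias propagates can be made concrete with a minimal sketch. The hypothetical loan data below is already skewed by past human decisions; a "model" that learns nothing more than the statistical correlation between an applicant's district and past approvals then applies that correlation uniformly to new applicants. All names and figures here are invented for illustration, not drawn from any real lender.

```python
# Minimal sketch (hypothetical data) of an algorithm absorbing human bias:
# it learns only the approval rate per district, then applies that
# correlation uniformly -- no motives are recorded anywhere.

from collections import defaultdict

# Historical decisions, already skewed: applicants from the "poor"
# district were rejected more often at comparable income levels.
history = [
    {"district": "wealthy", "income": 40_000, "approved": True},
    {"district": "wealthy", "income": 25_000, "approved": True},
    {"district": "poor",    "income": 40_000, "approved": False},
    {"district": "poor",    "income": 25_000, "approved": False},
    {"district": "poor",    "income": 50_000, "approved": True},
]

def train(records):
    """'Learn' the approval rate per district -- a pure correlation."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["district"]] += 1
        approvals[r["district"]] += r["approved"]
    return {d: approvals[d] / totals[d] for d in totals}

def decide(model, applicant, threshold=0.5):
    """Approve only if the learned district rate clears the threshold."""
    return model[applicant["district"]] >= threshold

model = train(history)
# Two applicants with identical incomes receive different decisions,
# purely because of the district correlation in the training data.
print(decide(model, {"district": "wealthy", "income": 30_000}))  # True
print(decide(model, {"district": "poor",    "income": 30_000}))  # False
```

The point of the sketch: income never enters the decision at all, yet the outcome looks plausible from the outside – exactly the opacity the article describes, where discrimination cannot be traced back to any recorded motive.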

But is this perception justified? Is the current regulatory and supervisory framework no longer sufficient to steer AI solutions in the direction desired by the national economy and society? This question clearly cannot yet be answered conclusively. This is illustrated by an extensive practical study presented by the German Federal Financial Supervisory Authority (BaFin), together with the Boston Consulting Group, the Fraunhofer Institute for Intelligent Analysis and Information Systems, and the consulting firm Partnerschaft Deutschland, in the summer of 2018.

The study provides numerous scenarios for the possible influence of AI and Big Data on the financial market, with suggestions for discourse that will most likely be ongoing. BaFin president Felix Hufeld stated unequivocally in the foreword to the study: “In view of rapidly advancing digitalization, the supervisory authority must repeatedly ask itself whether its practice is keeping pace. The same goes for its tools and regulation.”

Exploratory exchanges with industry, research, and other regulatory authorities are therefore still in full swing. Nevertheless, in a press release, BaFin points out to the corporate world that “management can neither automate nor outsource its responsibility. Complex models must not lead to opaque decisions and stand in the way of proper business organization.”

Where the ball lies: with the companies

Thus, the ball is squarely in the court of business and industry. Sitting on one's hands and waiting for guidelines from government and regulatory authorities is not an option. On the contrary: it is precisely because state institutions are still in the midst of their own fact-finding that proactive initiatives are in order. Since massive impact is expected in all areas of business, AI governance will become a top priority for sustainability management – especially as more and more investors are beginning to ask how companies intend to manage the opportunities and risks of AI.

Part 2 of this series will lay out key concepts and a clear course of action for addressing the deployment of AI.

This article originally appeared in Börsen-Zeitung and is republished by permission.

Follow SAP Finance online: @SAPFinance (Twitter) | LinkedIn | Facebook | YouTube


About Daniel Schmid

Daniel Schmid was appointed chief sustainability officer at SAP in 2014. Since 2008, he has been engaged in transforming SAP into a role model of a sustainable organization, establishing mid- and long-term sustainability targets. Linking non-financial and financial performance is a key achievement of Daniel and his team.