Values Build Trust: The Universal Declaration Of Human Rights Secures The Ethical Use Of AI

Daniel Schmid

The potential benefits of artificial intelligence (AI) are impressive. The technology will enable us to master economic, social, and environmental challenges that have long seemed insurmountable. AI already supports us in healthcare, disaster protection, and the fight against poverty: For instance, Stanford University reports that scientists have used machine learning to create a mapping technique that “identifies global poverty zones by comparing [millions of] daytime and nighttime satellite images.” The lights at night “provide an excellent proxy for economic activity,” they report. With this information, Stanford’s innovative AI technique can “map the dimensions of distressed areas […] in hard-to-reach corners of the world.”

But as large as these benefits are, we also need to deal with the potential risks of AI. One such risk is unconscious bias in AI training data, which can cause software to discriminate without our being aware of it. If an applicant with certain personal characteristics no longer receives invitations to job interviews, or a business owner from a poorer part of town can no longer get a loan, these people will justifiably begin to feel left behind.
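Such discrimination can often be surfaced with simple checks. The sketch below is purely illustrative (the loan data, group labels, and the “four-fifths” threshold are assumptions for the example, not SAP’s method): it compares a system’s approval rates across two groups and flags a possible disparate impact.

```python
# Minimal sketch: checking automated decisions for disparate impact
# across a protected attribute, using the "four-fifths rule" as a
# simple heuristic threshold.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group approval rate."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: (neighborhood, approved)
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 5 + [("B", False)] * 5

ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("warning: possible disparate impact")
```

A check like this says nothing about *why* the rates differ; it only tells reviewers where to look, which is exactly the kind of signal human rights due diligence needs.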

The ball is in the IT industry’s court. Although society is just beginning to discuss the ethics of AI, innovators need guidance now to safeguard their work. As the international “codification” of human dignity, the Universal Declaration of Human Rights can guide us as a “north star.” No other value-based guidance offers such potential to gain and strengthen employees’, investors’, customers’, and society’s trust in how we use AI. If we can demonstrate that our AI technology respects human dignity, then we will be able to reduce the concerns that people often have about machine learning and artificial intelligence. Respecting human rights is therefore a non-negotiable part of AI governance. So is promoting human rights, for example through AI’s contribution to the United Nations’ Sustainable Development Goals (SDGs); I’ll come back to this later.

Augment human talent

Let us first take a closer look at what putting the respect and promotion of human rights at the heart of your business means. “SAP considers the ethical use of data a core value,” said CFO Luka Mucic on September 18 when SAP released its Guiding Principles for Artificial Intelligence. “We want to create software that enables the intelligent enterprise and actually improves people’s lives. Such principles will serve as the basis to make AI a technology that augments human talent.”

The guiding principles will help SAP enable business beyond bias, maintain transparency and integrity, and uphold quality and safety. In developing AI software, we will remain true to our Human Rights Commitment Statement, the UN Guiding Principles on Business and Human Rights (UNGPs), laws, and widely accepted international norms. Wherever necessary, our AI Ethics Steering Committee will serve to advise our teams on how specific use cases are affected by these guiding principles. Where there is a conflict with our principles, we will endeavor to prevent the inappropriate use of our technology.

According to the UNGPs, human rights due diligence is an ongoing process that serves “to identify, prevent, mitigate, and account for how [a company] addresses its adverse human rights impacts.” This involves four key steps: “Assessing actual and potential human rights impacts, integrating and acting on the findings, tracking responses, and communicating how impacts are addressed.”

This is especially relevant for AI because of the extremely fast pace at which we innovate in this area. That is why it is vital that we get an outside view on things. To recognize risks from the start, the AI Ethics Advisory Panel works alongside our internal steering committee. With the panel, we’re opening a stakeholder dialogue that the UNGPs also promote.

Know and show

Whether the use of AI is in line with a company’s ethical principles depends on where and how the technology is used. “Humans must decide when to tell machine learning systems to do the work,” says Marc Teerlink, global vice president of SAP Leonardo, new markets, and artificial intelligence at SAP, in a recent blog about the role of humans in the rise of AI. For this reason, our AI Ethics Steering Committee and Advisory Panel primarily look into questions relating to the use of the technology. Ultimately, we can’t just make the technology available and not care about what it is used for. Rather, we want to understand potential use cases for our software before we ship it.

As such, we need to proactively “know and show” how we deal with the potential impacts of AI. One key to achieving this is that we involve not only product development teams but also sales and consulting in our ethical considerations. These teams are in direct contact with the customer, which makes them best placed to judge how the customer plans to use the software.


But having said that, AI doesn’t just come with risks. In fact, it’s quite the contrary: Most of the potential use cases are beneficial. Tapping into the opportunities and development potential that AI technologies offer is just as important as managing their risks, for example by looking at how AI can help realize the UN’s 17 SDGs.

The Project Breakthrough website provides impressive insight into capabilities that are already in reach. Maintained by the United Nations Global Compact, the website features over 20 use cases for AI that contribute to achieving specific SDGs, such as:

  • SDG 2 – Zero hunger: Support farmers’ decision making and increase productivity.
  • SDG 3 – Good health and wellbeing: Speed up the development of new drugs, allowing treatments to reach those who need them more quickly.
  • SDG 4 – Quality education: Provide lifelong learning companions that help the learner to adapt and build new skills throughout their lifetime.
  • SDG 7 – Affordable and clean energy: Help consumers to find the most affordable and clean energy deals by using AI agents.
  • SDG 11 – Sustainable cities and communities: Orchestrate and optimize traffic flows and energy allocation within cities.
  • SDG 13 – Climate action: Improve actions to mitigate climate change through better modeling and analysis of large, complex data sets.

We could continue adding to this list for quite some time. And almost every new idea increases the number of connections to our software solutions. Since billions of people are already touched by our software every day, we see it as our duty to ensure AI development is sustainable. With the Universal Declaration of Human Rights behind us, we align AI with our vision and purpose, which is to help the world run better and improve people’s lives.

Learn more about SAP’s guiding principles for artificial intelligence (AI) here.  

About Daniel Schmid

Daniel Schmid was appointed chief sustainability officer at SAP in 2014. Since 2008, he has been engaged in transforming SAP into a role model of a sustainable organization, establishing mid- and long-term sustainability targets. Linking non-financial and financial performance is a key achievement of Daniel and his team.