Should You Trust AI To Influence Business Decisions And Drive Automation?

Sebastian Wieczorek

Innovation is the cornerstone of progress, but as Alfred Nobel’s invention of dynamite reminds us, new discoveries always carry both benefits and risks. As organizations infuse business processes with insights from machine learning and intelligent bots carry out autonomous tasks, it is fair to ask: what if the underlying AI algorithms do not work as expected?

The good news is that in many applications of AI, ethical questions like those Nobel faced are not that hard to answer — for instance, when automating an industrial activity that neither deals with personal data nor poses risks to humans (such as vision-guided spraying of paint). In general, the more humans are affected by automated decisions, the higher the potential impact of a wrong decision and the more software suppliers need to care about securing the process and controlling the outcomes. For these human-affecting scenarios, there are quality and safety best practices you can rely on to prevent and mitigate the risks associated with AI adoption. Let’s look into some of them.

Sound software development processes

AI algorithms and models are software programs, and as with all enterprise-class software, they have to be developed according to best practices. Whether you are acquiring your AI software from an outside vendor or developing it in-house, you need to ensure it is produced with quality-management systems in place to drive standardized development processes. These processes serve as a basis for people to share and apply best practices that can maintain and improve the quality of the software.

Furthermore, in the case of AI, it is highly recommended that two different teams participate in the development effort to prevent one team’s unconscious bias or preconceived ideas from skewing predictions toward a specific outcome. For example, if a developer assumes that a given job function would be done by a male employee, their recruiting algorithm could be biased toward characteristics most commonly found in men.
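As a minimal sketch of what such a second-team review could look like, assuming historical hiring data in a hypothetical CSV file with illustrative gender and hired columns, reviewers could audit the training data for skew before any model is fit:

```python
import pandas as pd

# Hypothetical hiring history; the file and column names are illustrative.
df = pd.read_csv("hiring_history.csv")  # columns include: gender, hired

# Compare historical hiring rates across groups to surface potential
# label bias before the data is used to train a recruiting model.
rates = df.groupby("gender")["hired"].mean()
print(rates)

# A large gap signals that the data may encode past bias rather than
# job-relevant characteristics (the 10-point threshold is an assumption).
if rates.max() - rates.min() > 0.10:
    print("Warning: hiring rates differ widely across groups; review "
          "features and labels for bias before training.")
```

A check like this does not prove the data is fair, but it forces the kind of conversation between teams that catches preconceived ideas early.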

Representative and GDPR-compliant data

Choosing the right data for training AI models is probably the most critical step in achieving positive outcomes from AI adoption. If the data you select is biased toward a specific outcome or not comprehensive enough to be representative of the real-life situation you want to address, the predictions AI software generates will most likely be flawed.

Take the example of an application that uses AI to determine risk profiles based on the propensity for natural disasters in a given region. If the data set doesn’t contain enough historical data, the application could erroneously label an area “high risk” because an unusual series of weather events recently struck it. Even worse, the opposite could happen: an area that is otherwise known to be high risk could be labeled safe simply because it has recently had good luck.
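A minimal sketch of such a representativeness check, assuming a hypothetical event log with illustrative region and year columns, might withhold a risk label wherever the observed history is too short to be meaningful:

```python
import pandas as pd

# Hypothetical event log, one row per recorded disaster; the file and
# column names (region, year) are illustrative.
events = pd.read_csv("disaster_events.csv")

MIN_YEARS = 30  # assumed minimum observation window per region

summary = events.groupby("region")["year"].agg(["min", "max", "count"])
summary["span"] = summary["max"] - summary["min"] + 1

for region, row in summary.iterrows():
    if row["span"] < MIN_YEARS:
        # Too little history: a recent cluster (or lull) of events would
        # dominate the estimate, so withhold the risk label entirely.
        print(f"{region}: insufficient history ({row['span']} years)")
    else:
        rate = row["count"] / row["span"]  # events per year
        label = "high risk" if rate > 1.0 else "normal"  # threshold assumed
        print(f"{region}: {rate:.2f} events/year -> {label}")
```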

Furthermore, you need to be mindful of using data ethically and in compliance with privacy regulations, such as GDPR. For example, if the data you use includes sensitive information about individuals, the predictions derived from such data may expose your company to liabilities and breach your customers’ trust.
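One such precaution, sketched below under the assumption of a hypothetical customer file with illustrative column names, is to drop direct identifiers and pseudonymize record keys before the data ever reaches a training pipeline. Note that hashing alone is pseudonymization, not full anonymization under GDPR:

```python
import hashlib
import pandas as pd

# Hypothetical customer data set; file and column names are illustrative.
df = pd.read_csv("customers.csv")

# Drop direct identifiers that the model does not need.
df = df.drop(columns=["name", "email", "phone"], errors="ignore")

# Pseudonymize the record key so rows can still be linked for auditing
# without exposing the original identifier. This reduces exposure but
# must be paired with access controls and a documented legal basis.
df["customer_id"] = df["customer_id"].astype(str).map(
    lambda v: hashlib.sha256(v.encode()).hexdigest()[:16]
)

df.to_csv("customers_pseudonymized.csv", index=False)
```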

It is imperative that you continuously have visibility and control of how data is sourced, selected, and maintained.

Purpose-built development and quality assessment in a real-life context

To be effective, AI software needs to be designed for a well-identified purpose and tested in real-world scenarios. This is essential to validate that the underlying AI models fit the intended use within your environment and will perform as expected.

Before rolling out AI software in a production environment, make sure your software provider works closely with your end users and subject matter experts. This is essential to validate the accuracy of the AI-driven recommendations and ensure the overall solution will have a positive impact on your business. It is very likely that your software provider will have to further tune the system’s quality, safety, reliability, and security to ensure the expected outcome.
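One common way to structure such a validation is a shadow-mode run, in which the AI’s recommendations are logged alongside the decisions experts actually make before the system is given any authority. A minimal sketch, assuming a hypothetical log file and an acceptance threshold agreed with the business:

```python
import pandas as pd

# Hypothetical shadow-mode log: each AI recommendation recorded next to
# the decision a human expert actually made; names are illustrative.
log = pd.read_csv("shadow_mode_log.csv")
# columns: case_id, ai_decision, expert_decision

agreement = (log["ai_decision"] == log["expert_decision"]).mean()
print(f"AI/expert agreement: {agreement:.1%} over {len(log)} cases")

# Gate the production rollout on the agreed threshold (the 95% figure
# here is an assumption, not a standard).
if agreement < 0.95:
    disagreements = log[log["ai_decision"] != log["expert_decision"]]
    print(f"Below threshold: review {len(disagreements)} disagreements "
          "with subject matter experts before go-live.")
```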

Ethical development, distribution, and utilization of AI technology

Like dynamite, an AI model could be used for ethical and unethical purposes. For example, an application could use AI-powered facial recognition to ensure only authorized personnel can enter a testing lab. That same AI technology could, however, also be deployed to track the movements of private citizens illegally.

This means the creators, vendors, and users of AI technology share the responsibility of ensuring a good AI algorithm does not fall into the wrong hands. The solution is not simple, but it is important that teams creating AI are aware of the risks and committed to doing their part. Furthermore, ethically minded vendors should contractually require their customers to use AI solutions only for the purpose for which they have been licensed and encourage employees, customers, and partners to be mindful of the tremendous opportunities and risks brought about by AI.

Learn more about building safeguards around the ethical use of AI.


About Sebastian Wieczorek

Dr. Sebastian Wieczorek serves as Head of the Leonardo Machine Learning Foundation, SAP’s technology platform that helps companies develop and consume AI capabilities. In addition, he is a key spokesperson for AI and a member of SAP’s AI ethics steering board, which provides company-wide guidelines on how to apply AI in a human-centric way. Sebastian is a member of the Enquete Commission on AI of the German Bundestag, a board member of Bitkom’s AI working group, and a member of the steering board of the EU project SHERPA, which conducts research on ethical questions regarding the use of AI; he also serves as an academic expert and reviewer for the European Commission and the German Ministry of Education and Research. In previous positions at SAP, Sebastian coordinated all startup engagement activities of the SAP Innovation Center Network, including the scouting of companies for partnering and M&A, the mentoring of startups at accelerators like TechStars, Startupbootcamp, Seedcamp, and SAP.io, as well as representing SAP at the German Startups Association. While at SAP Research, Sebastian managed EU-funded research projects and led the "Application Engineering Group".