In the two previous blogs in this series, I talked about why a consistent approach to ethical questions around human-machine interaction really matters and what some of the key challenges are. Let’s now look at an approach that should be helpful.
Will self-aware, super-intelligent artificial intelligence (AI) become a reality? Whether it does or not, AI is clearly making more and more decisions. To make sure those decisions benefit humans, it would be wise to define "digital ethics."
We need to proactively root ethical principles in AI early, before it grows beyond our control. To do this, we must anticipate the next steps in AI development and the new ethical requirements that come with them. A vast number of small improvements and "go live" steps for AI-related inventions are to be expected, but we can foresee some major developmental steps as well:
- Isolated and specialized AI in advisory roles: We already have highly specialized AI as advisers without responsibility for any decision or action, like search engines or navigation systems. Digital ethics must ensure that the results conform with human ethical expectations.
- Isolated and specialized decision-making AI: Systems make decisions in specific areas, e.g., traffic control, finance, or healthcare. Humans can mostly understand the system’s decisions. To ensure intelligent and humane decisions, digital ethics should cover a simple system of human values such as the value of life.
- Isolated and task-agnostic AI: AI makes decisions taking several aspects of a situation into account. Digital ethics using a generic system of values is required as humans may no longer understand the reason a machine made a certain decision. Most experts think this will become a reality within a few decades.
- Independent and cooperating AI: Several specialized AIs interact to come to a broad variety of decisions, e.g., projects for environmental sustainability. A complex value system addressing all aspects of human society is required.
- Self-organizing AI improving itself: In the future, a dominant system or an AI society independent from humans may appear (“artificial superintelligence”). Digital ethics must be robust enough to ensure the long-term survival of human society, including human values.
How can we proactively suffuse ethics into AI at the different levels of its development? At the beginning, isolated AI digital ethics must consist of ethical rules placed at the very root of the AI’s computation processes, resulting in behavioral patterns in accordance with human ethics, such as security, privacy, legal compliance, and the value of life.
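To make the idea of rules "at the very root of the computation processes" concrete, here is a minimal, purely illustrative sketch. All names and thresholds are hypothetical; the point is only that hard ethical constraints (value of life, privacy, legal compliance) are checked before any action executes, rather than bolted on afterwards:

```python
# Hypothetical sketch of a root-level "ethics guard": every proposed action
# must pass hard ethical constraints before it is allowed to execute.
# The names, fields, and thresholds below are illustrative, not a real API.

from dataclasses import dataclass, field


@dataclass
class Action:
    name: str
    risk_to_life: float      # estimated probability of harming a person
    violates_privacy: bool
    violates_law: bool


@dataclass
class EthicsGuard:
    """Rejects any action that breaks a root-level ethical rule."""
    max_risk_to_life: float = 0.0          # value of life: zero tolerance
    log: list = field(default_factory=list)

    def permit(self, action: Action) -> bool:
        if action.risk_to_life > self.max_risk_to_life:
            self.log.append((action.name, "rejected: risk to life"))
            return False
        if action.violates_privacy or action.violates_law:
            self.log.append((action.name, "rejected: privacy/compliance"))
            return False
        self.log.append((action.name, "permitted"))
        return True


guard = EthicsGuard()
print(guard.permit(Action("reroute traffic", 0.0, False, False)))    # True
print(guard.permit(Action("share patient data", 0.0, True, False)))  # False
```

The design choice matters more than the code: the guard sits in front of the decision pipeline, so an action that fails an ethical constraint never runs, and every verdict is logged for human audit.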
For independent AI, we need a broader spectrum of decision-making authority and more generic algorithms and value sets, covering, for instance, ethics in human culture, communication, and science. Handling intercultural challenges and different, possibly contradictory, human legislation needs additional attention.
When AI is supposed to reach decisions based on complex contexts, the grounds for departing from ethical rules must be defined as well. Keeping the "hierarchy of needs" of humans and AI synchronized (or at least ensuring they do not contradict each other) is probably the most important task for cooperating, and perhaps someday self-organizing, AI. A potential world shaped by AI must still allow people to meet their needs and must be governed smartly enough to avoid unintended consequences.
We do not know today what a self-organizing, super-intelligent AI will ultimately strive for, but it is up to us to introduce, from the beginning, a basic set of needs suitable for a machine-based existence that neither impedes nor conflicts with the human hierarchy of needs. What motivates an AI remains an open question, and as soon as a superintelligence awakens, humans will no longer be able to change its goals or actions.
Outlook for key elements of comprehensive digital ethics
The development of digital ethics is just beginning. On the AI side, the best-known starting point is still the "Three Laws of Robotics," formulated by Isaac Asimov in his science fiction writings in 1942. Just as human legislation has grown far beyond such simple codes, we must expect a similar increase in complexity in digital-ethics-based rules.
Ethical values can not only be transformed into legislation that governs the everyday behavior of humans and machines; they also form the foundation of our democratic order. Keeping this in mind, we can dare to predict some cornerstones of sustainable digital ethics:
- “Freedom” and “integrity” of (human and AI) individuals are two of the highest values
- Present definitions of "equality" might need to be broadened and made ready to serve more than one self-aware species
- “Dignity” is a universal right of any being, independent of its evolutionary history
- “Diversity” is helpful to keep social exchange running, and it is an evolutionary driver
Such principles need broad agreement: industry self-commitment as well as global legislation comparable to the Geneva Conventions or the rules governing genetic research. A first step could be to establish an "AI ethics quality seal" for sustainable development. Companies providing AI-based services need a "Digital Ethics Committee" and should add rules about digital ethics to their codes of business conduct. Aligning ethical values and their implementation is in fact already long overdue.
As artificial intelligence takes hold, the organizations that gain a competitive edge will be those that pay closest attention to The Human Angle.