In my earlier blog, I discussed why digital ethics matter. Here I will address why it will be a long and difficult road to define digital ethics that ensure quality of life for humans, regardless of the role that artificial intelligence (AI) plays in the future.
The need for ethics beyond legislation is already here. There have been cases of children ordering absurd items using home assistants. If those “accidental” purchases are extraordinarily expensive, shouldn’t someone or something verify the request with the account owner first? How should a robot companion react when a sick person refuses urgently needed medication?
Nobody knows the degree to which AI will enhance machines with intelligence and self-awareness. If AI can develop its own infrastructure, sustainable digital ethics will be required as the fundamental basis of its community. From a holistic point of view, three complementary ethical frameworks must exist in an AI-permeated world: one for human society, one for the coexistence and interaction of AI-driven machines and humans, and one for a potential AI society.
Concerning ethics for humans, we are far from one overarching global understanding. Different countries have very different views about the death penalty, euthanasia, gender equality, and children’s rights.
Ethics for human-machine interaction must address AI-based machines at various levels of development. Developers have already started to create AI that is focused on specialized tasks and works in a very narrow context. An apparently simple question reveals the problem with such “single-context” AI: Ten birds sit on a fence. You shoot one. How many are left on the fence? Obviously, it is simple for an AI to calculate nine. However, there is more than simple math going on here. First, a shot is loud; second, birds fly away when they hear an unexpectedly loud noise. A “multi-context” AI would thus come to a different answer.
For “single-context” AI, it is easy to add task-specific routines that end in a simulation of human-style ethical behavior for certain, foreseeable situations. But merely collecting such “island” ethics will not help to create a holistic framework of digital ethics. As the birds example shows, aspects from several contexts need to be combined to get the right result.
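To make the contrast concrete, here is a minimal, purely illustrative Python sketch. The function names and the “rules” they encode are invented for this example; the point is only that a single-context model answers with arithmetic alone, while a multi-context model also applies knowledge from other domains, such as acoustics and animal behavior:

```python
def single_context_answer(birds_on_fence: int, birds_shot: int) -> int:
    """Pure arithmetic: reasons only within the 'math' context."""
    return birds_on_fence - birds_shot


def multi_context_answer(birds_on_fence: int, birds_shot: int) -> int:
    """Combines contexts: a gunshot is loud, and birds flee loud noises,
    so every surviving bird leaves the fence."""
    gunshot_was_fired = birds_shot > 0
    if gunshot_was_fired:
        return 0  # the remaining birds fly away
    return birds_on_fence


print(single_context_answer(10, 1))  # 9 -- math alone
print(multi_context_answer(10, 1))   # 0 -- math plus world knowledge
```

The toy example makes the deeper point visible: both answers are internally consistent, and only the knowledge that spans contexts determines which one is right.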
Let’s face it, once we get to the point where machines can out-think us, they might not want us around anymore. That’s why using AI mainly for stock trading, cyberattacks, or autonomous weapons is probably not a good starting point for AI ethics. The same goes for the political interest that some leaders have in AI. Vladimir Putin stated in September 2017, “Whoever becomes the leader in this sphere will become the ruler of the world.” With growing intelligence and responsibilities, an AI that is made to rule – and above all win – will not be interested in mutually beneficial collaboration with humans.
An AI conscience must serve as the ethical reference system for a post-singularity Superintelligence, or society of intelligent machines, if that becomes reality. A “machine society” would be inherently different from a human society. For instance, immortality and the ability to instantly create clones would likely influence the value of a machine’s own existence. Social behavior based on family or leisure activities would not be applicable. Inspiration by religion or philosophy would not exist. In fact, any thinking pattern of a new intelligence will be completely different from any human thought. We should never forget that, especially when movies paint a picture of robots with human-like intelligence. The “intelligent and self-driving” car KITT from the 1982 TV series Knight Rider surely told the truth when it said: “Companionship does not compute.”
To combine all three ethical frameworks – for humans, intelligent machines, and human-machine interaction – and to develop them step by step, a set of congruent basic values will be needed and must be discussed early and often. Defining guidelines will not be easy and will challenge people’s willingness to change established points of view, especially because we will need ethical and legal concepts that are more flexible than today’s definitions.
Humans quite often express their needs based on emotions and intuition, which imply our ethical values. One characteristic of human emotions is that they change over time and thereby support a variety of decisions, which strengthens the ability to innovate. At least for the first generation of AI, features like emotions may not be relevant, but we should not be too quick to claim “intuition” as a purely human capability. After AlphaZero defeated Stockfish, which until then had been the best chess software, Garry Kasparov remarked that it had used a human-like approach instead of the brute-force strategies of earlier systems.
So, it looks like even intuition can’t be reserved solely for humans any longer. On the other hand, giving AI such human “characteristics” can be helpful because it brings humans and AI closer together. It sounds far out, but this could be a foundation for the integration and mutual development of humans and AI.
In my next blog, I will discuss a step-by-step approach that will be helpful to achieve digital ethics that ensure a mutually beneficial coexistence of humans and AI.