Why Digital Ethics Matter

Guido Wagner and Esther Blankenship

Artificial intelligence (AI) is both exciting and unsettling. What effects will AI have on our minds and bodies, not to mention the world we live in? Of course, AI will have positive effects: self-driving cars mean less time stuck in traffic; digital assistants relieve us of many mundane tasks. Then there is the scarier side: automation means the loss of jobs, and this time white-collar workers will be hit as well. But this is not the most important reason why the likes of Stephen Hawking and Elon Musk have warned against the incalculable risks AI poses to humanity. Some philosophical and macro-evolutionary reflections reveal that there is indeed a lot at stake.

The unwritten future of AI

A survey conducted by Vincent C. Müller and Nick Bostrom in 2013 found that the AI experts polled consider it 90% likely that AI with human-like capabilities will be developed by 2075 at the latest. Building on Gordon Moore’s observation of the exponential growth of computational power and on Alan Turing’s test for recognizing machine intelligence, Raymond Kurzweil has formulated his own predictions[1, 2]. He expects a “Technological Singularity” in 2045; from then on, machines will design and build themselves without human involvement.
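To make the exponential premise concrete, here is a minimal Python sketch of Moore-style doubling. The two-year doubling period and the time horizons are illustrative assumptions, not figures taken from Moore or Kurzweil:

```python
# A minimal sketch of exponential growth in computational power.
# The 2-year doubling period and the horizons below are assumptions
# chosen for illustration only.
DOUBLING_PERIOD_YEARS = 2.0

def relative_compute(years: float) -> float:
    """Computational power relative to today, assuming steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 20, 30):
    print(f"In {years} years: ~{relative_compute(years):,.0f}x today's compute")
# Prints ~32x, ~1,024x, ~32,768x -- the kind of compounding behind
# Kurzweil's timeline.
```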

Filmmakers have, of course, speculated about all this. There are “good-guy” scenarios like the Star Trek universe or the kindly robot in Bicentennial Man – where humans are (almost) always in control of the interaction with machines. On the other hand, Hollywood has also created “bad-guy” scenarios in which machines follow their own agenda. Perhaps best known is the scene from 2001: A Space Odyssey in which the spaceship’s computer, the HAL 9000, murders one crew member and attempts to do the same to the remaining astronaut. In Terminator and The Matrix, machines are clearly out to do more than save their own skins.

Other movies, like Transcendence, explore the transhumanist goal of using science and technology to overcome human limitations such as aging and the bounds of memory and computation. Kurzweil says, “There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality.”

Can there be other post-Singularity scenarios? Everything could turn absurd, as in the novel QualityLand[3], where algorithms control human life. But let’s trust in a reasonably orderly course of events and look at how nature’s evolution has unfolded so far.


Beyond extremes

From the first moments after the creation of the universe, things have increased in complexity. Energy, atoms, molecules, unicellular organisms, multicellular life, and finally self-aware life have evolved from one level of complexity to the next, as Kurzweil describes[1]. A second macro-evolutionary driver can be called “game-changing events”: the sudden extinction of the dinosaurs and the oxygen crisis of roughly 2.4 billion years ago had a major impact on the course of evolution.

Will there be more “game changers” in the future? Can there be further evolutionary levels? The short answer: there is no reason why not. And we need to be aware that there is no guarantee that humans will remain at the forefront of evolution. Perhaps humans can offer something that prepares and initiates the next major evolutionary step. Unfortunately, there is no promise that the result of that step will be based on humans, or even on biological life.

What we call “artificial intelligence” – a term coined by John McCarthy for the 1956 Dartmouth workshop – might require the most complex machinery ever seen, but it could potentially open the door to a new level of evolutionary abstraction. Elon Musk has suggested that humans might just be “the biological boot loader for digital superintelligence.”

What can developers and investors learn from such a prospect? Today, the most intelligent algorithms are mainly used to optimize advertisements, financial trades, and autonomous weapons – always with the goal of winning. But is that the first thing we should teach a “new intelligence”? Shaping the future for the best possible outcome is why defining digital ethics is imperative for us today. In fact, we need the mindset of a parent or teacher. Ethical principles are needed for people and a “new intelligence” to coexist.
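To see why “a goal to win” is a design choice rather than a law of nature, consider a toy sketch in Python. The actions, payoffs, and harm scores here are entirely hypothetical; the point is only that the objective function encodes the values a system is taught:

```python
# Toy illustration: the same maximizer, with and without an ethical penalty.
# All action names and numbers are hypothetical.
ACTIONS = {
    # action: (raw_payoff, harm_caused)
    "aggressive": (10.0, 8.0),
    "balanced": (7.0, 2.0),
    "cautious": (4.0, 0.2),
}

def best_action(harm_weight: float) -> str:
    """Pick the action that maximizes payoff minus weighted harm."""
    return max(ACTIONS, key=lambda a: ACTIONS[a][0] - harm_weight * ACTIONS[a][1])

print(best_action(harm_weight=0.0))  # pure "win at all costs" -> aggressive
print(best_action(harm_weight=2.0))  # harm priced in -> cautious
```

The machinery is identical in both calls; only the weight given to harm changes the behavior. That weight is exactly the kind of value judgment digital ethics must supply.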

The first steps, like the Partnership on AI, are underway. Yet they must be extended and enforced on a global scale. Furthermore, to create ethics that serve not only humans and intelligent machines but especially their coexistence, we need to have a clear picture of which ethics will be required before the technology is developed. It sounds like an epic challenge, and it is. But the journey already began when we started to let machines make decisions, and evolution does not wait.

This blog is the first in a series of three. Continue reading on the SAP User Experience Community to find out about the challenges that we need to address before sustainable digital ethics can be defined, and about a potential approach for defining them.

 

Sources:

[1] Ray Kurzweil, 2005, “The Singularity Is Near: When Humans Transcend Biology”, ISBN 978-0739466261

[2] Ray Kurzweil, 2012, “How to Create a Mind: The Secret of Human Thought Revealed”, ISBN 978-0670025299

[3] Marc-Uwe Kling, 2017, “QualityLand”, ISBN 978-3550050237

 



About Guido Wagner

Guido Wagner is responsible for invention projects in SAP Design. He focuses on optimizing the user experience in a digitalized business environment. His passion is preparing for the future of work through sustainable artificial intelligence that improves the way people live. Share your thoughts with Guido on LinkedIn.


About Esther Blankenship

Esther Blankenship is a User Experience Evangelist at SAP. She manages and curates the SAP User Experience Community (www.experience.sap.com), a public site dedicated to the exchange of ideas and knowledge about design and user experience. Visitors can also keep up to date on what SAP is doing to improve the user experience of its software.