The Rise of Autonomous Vehicles and Why Ethics Matter

Rudeon Snell

There is little doubt that autonomous vehicles, more commonly known as self-driving cars, are set to be transformational. The market is projected to reach roughly $42 billion by 2025, with a compound annual growth rate (CAGR) of 21% through 2030. Not only will this disruption be immense, giving birth to new types of businesses, services, and business models, but the introduction of these vehicles will carry equally immense ethical implications.

The World Health Organization (WHO) estimates that approximately 1.35 million people around the world die each year in traffic accidents. If there were an effective way to eliminate 90% of those accidents, I don’t think there’s any doubt that most people would support it. This is the aspirational goal of autonomous vehicles: to eliminate the main source of traffic accidents, namely human error. The benefits of autonomous vehicles are certainly clear – time saved, increased productivity, improved safety, continuous service availability – but challenges remain, and if they are not addressed, they could undermine the promise this technology holds.

Ethical challenges with autonomous vehicles

One of the key challenges for autonomous vehicles centers on how they value human life. Who decides who lives and who dies in split-second decision-making? Who decides the value of one human life over another? And, more importantly, how is that decision calculated? Being on a road is inherently dangerous, and this danger means that trade-offs will occur as self-driving cars encounter life-or-death situations. It is inevitable. And with trade-offs, you need ethical guidelines.

If an autonomous vehicle makes a mistake, it could directly lead to the loss of life. In these scenarios, the questions of who decides who lives and how those decisions are made become very important. Researchers have been grappling with this notion for years. The trolley problem was proposed by the philosopher Philippa Foot in 1967 and has since proliferated into many variants. Generally, it is used to assess what decisions people would make when asked to take an action that would, for example, kill one person versus killing ten people.

Let’s assume an autonomous vehicle experiences a mechanical failure, its acceleration increases, and the car is unable to stop. If it continues, it will crash into a large group of pedestrians; alternatively, the car could swerve, crash into an obstacle, and kill the driver inside. What should the car do, who decides what the car should do, and where would liability rest for that decision? Experienced human drivers have spent years learning to handle split-second decisions like these, and they still don’t always get it right.

Moral utility vs. moral duty

Should autonomous vehicles perhaps adopt utilitarian principles when choosing who lives and who dies? Utilitarian principles, as advocated by the great English philosopher Jeremy Bentham, could offer a framework to help with the ethical decisions AIs need to make. The focus would be on decisions that result in the greatest good for the greatest number of people. However, at what cost? How are utilitarian calculations that violate individual rights reconciled? Why should my life be less important than the lives of five strangers?

Or should autonomous vehicles follow duty-bound principles, as advocated by the German philosopher Immanuel Kant? Under this system, a principle such as “thou shalt not kill” could bind the car to maintain its course, even if doing so harms other people, so long as the driver remains safe. With duty-bound principles, prioritizing your individual safety could, in effect, cause even more harm.
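To make the contrast concrete, below is a minimal, purely illustrative sketch of how a utilitarian policy and a duty-bound policy might each resolve the runaway-vehicle scenario above. The options, the harm estimates, and the function names are hypothetical assumptions invented for this illustration; they do not represent how any real manufacturer programs its vehicles.

```python
# Purely illustrative sketch: two hypothetical decision policies for the
# runaway-vehicle scenario described above. All names and numbers are
# invented for illustration, not drawn from any real system.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    occupant_deaths: int    # expected harm to people inside the vehicle
    pedestrian_deaths: int  # expected harm to people outside the vehicle


def utilitarian_choice(options):
    """Pick the option that minimizes total expected deaths,
    regardless of whether the victims are inside or outside the car."""
    return min(options, key=lambda o: o.occupant_deaths + o.pedestrian_deaths)


def duty_bound_choice(options, default_name="maintain course"):
    """Follow a fixed rule: do not actively intervene to kill; keep the
    default action (which protects the occupant) if it is available."""
    for option in options:
        if option.name == default_name:
            return option
    return options[0]


options = [
    Option("maintain course", occupant_deaths=0, pedestrian_deaths=10),
    Option("swerve into obstacle", occupant_deaths=1, pedestrian_deaths=0),
]

print(utilitarian_choice(options).name)  # -> "swerve into obstacle"
print(duty_bound_choice(options).name)   # -> "maintain course"
```

The point of the sketch is simply that the two framings can disagree on the very same inputs, which is exactly the tension the rest of this article explores.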

I suspect the key lies in how customers eventually adopt and use autonomous vehicles and the services they offer. Would we, as consumers, be interested in owning self-driving cars that follow utilitarian principles and might sacrifice us to save a group of strangers? Or would we only buy self-driving cars that prioritize our own safety, even if that means potentially killing a group of strangers? Who decides which ethical guidelines the AI in autonomous vehicles will follow? The harsh truth is that, ultimately, we as consumers do.

Autonomous vehicles will come to market, and they won’t be perfect. We know autonomous vehicles struggle to detect small objects like squirrels. It’s quite possible that they also cannot detect squirrel-sized potholes and rocks, which could cause life-threatening situations such as tire blowouts.

As with most previous technological advances, consumers decide which trends technology providers pursue. We do so by voting with our money. If 90% of autonomous vehicle sales are for units that prioritize our lives, potentially at the expense of others, and only 10% are for vehicles that follow utilitarian principles, guess where the focus for future autonomous vehicles will be? Yes, there might be mitigating alternatives. Autonomous vehicle manufacturers could decide to give consumers the choice of how their vehicle operates. But at what cost? And could that capability feasibly scale across regions and cultures?

Consumers’ power to decide

Autonomous vehicle manufacturers could decide that, in order to sell their self-driving cars, they need to develop them according to consumer preferences. In the past, regulation has been used to address this type of conundrum, but with the world of technology accelerating at an exponential rate, regulation doesn’t seem able to keep up. And even if it did, supply and demand would, as usual, inform the design of future products.

The increased use of autonomous vehicles is ultimately premised on trust. The people who buy or use them have to trust the technology and be comfortable using it before its true value can be realized. To build this trust and acceptance, autonomous vehicle manufacturers must ensure the technology is safe and caters to the needs of their consumers, whatever those needs might be.

In the early 1900s, when elevators became automatic, they made a lot of people very uncomfortable. People were so used to having an operator in the elevator that the idea of one running on its own scared them senseless. No one wanted to use them. So manufacturers invented compromises: soothing voices, big red stop buttons, safety bumpers, and creative ads that helped dispel fears. Gradually, people accepted the change. Today, we barely give getting into an elevator a second thought.

Waiting until the technology is better is simply not a viable option. There’s no guarantee how long perfecting it will take, and while we wait, millions will continue to die. Progress should be iterative, because the technology of today can already help save millions of lives. We should not let perfection be the enemy of good enough right now, especially considering the alternative. Is that perhaps the real ethical dilemma we face with autonomous vehicles?

An obscured path ahead

There’s also the notion that perhaps we do not have to decide how to value human life. Would it make sense to let AI determine its own outcomes, guided only by baseline moral principles that are globally accepted, perhaps a set of principles similar to Isaac Asimov’s three laws of robotics? The challenge is that ethics differ, just as values differ, and the way they manifest culturally varies across countries and regions. What is acceptable in Japan might not be acceptable in Europe. How life is valued in the East can differ from how it is valued in the West. This diversity has its benefits, for sure, but it also poses massive challenges when trying to design systems that can function universally.

Today, advanced technology is developed under the purview of the leading technology institutions, not under the purview of governments. Consider, for example, research and development spending: the amount spent by the top five U.S. defense contractors comes to less than half of the research spending of any of the major technology players, such as Microsoft, Apple, Google, Amazon, and Uber.

A key takeaway is that policymakers do not govern advanced technology in the commercial world; ultimately, technology providers do. Some consequences can be anticipated and are linked to the promises made on behalf of the technology, while some consequences, unfortunately, remain unforeseen.

My hope is that we do not let unintended consequences derail the promise of breakthrough technology simply because we never paused to anticipate how we would deal with them. Right and wrong are, after all, influenced by factors other than just the pros and cons of a situation. If we ask the hard questions now, we can steer our world in the direction we want it to go, not just for us today, but for our children tomorrow.



About Rudeon Snell

Rudeon Snell currently drives the Intelligent Enterprise Solutions business across EMEA South, focusing on helping customers across all industries reimagine their futures, enabled by exponential technologies. A futurist, strategist, and business transformation leader, Rudeon is passionate about technology’s power to help solve humanity’s grand challenges and move our collective societies forward.