Could Your Self-Driving Car Be A Sociopath?

Danielle Beurteaux

Depending on how you feel about driving, the advent of automated cars is either the best idea ever or a terrible, horrible idea. Reducing the more than 33,000 lives lost in fatal car accidents every year? Great, no question. Abdicating control over the vehicle you commute to work in every day? Hmmm.

A piece in the MIT Technology Review, dramatically titled Why Self-Driving Cars Must Be Programmed To Kill, discusses the kinds of decisions that will need to be programmed into driverless vehicles. What should a car do in a difficult situation? Choose the lesser of two evils? How will it “decide,” to borrow the article’s example, how to navigate an impending accident?

Internet of Things myths

Of course, the car isn’t “deciding” anything—those choices will have already been made by the people working at every level of the car’s design, long before its tires hit the asphalt.

As with broader concerns about robots and artificial intelligence, many (if not most) of these fears are based on false premises. Robots won’t save us, but they won’t harm us, either. They’re simply tools.

IoT technology will only be as good as we make it and only as effective as our use of it. That means applying ethics to every IoT advancement.

IoT doesn’t absolve us from responsibility

Carmaker Tesla recently enabled Autopilot in its Model S sedans with a software upgrade. It isn’t a fully automated system like the one Google is developing, but it does allow the driver to disengage: no hands on the steering wheel, no foot on the accelerator or brake. That didn’t work out too well for one driver, who posted his near-collision on YouTube.

Not for nothing, the capability is still in beta, and Tesla recommends that drivers keep their hands on the steering wheel.

The coming ethics avalanche

One cyclist’s recent experience with a Google car at a four-way stop tidily demonstrates the issues automated vehicles raise. Because he was holding a trackstand—balancing on his bike with both feet still on the pedals—the Google car didn’t know how to react. Car and cyclist went back and forth for a few minutes before the rider crossed through and went on his way.

Which is to say that every situation requires a reaction. Reactions are after the fact. And we’re beginning to come across a lot of these situations for which the ethically optimal outcome has yet to be determined.

If you’re using an IoT-enabled medical device, what happens to your data? Should it be regulated under existing HIPAA laws, which weren’t written with IoT in mind? Should it be disclosed, for example, to a police department looking for a match to evidence from a murder scene?

Researchers are wrestling with these ethical questions, including the authors of a paper arguing that experimental ethics is the way forward for making those decisions, an approach they contend would also help persuade drivers to accept autonomous vehicles.

Stanford University’s Chris Gerdes, an engineering professor, is grappling with these ethical questions on the front line. He believes many issues still need to be resolved before driverless cars become a reality outside of labs and test drives, and that the question isn’t what’s legal but what’s safe. Experienced human drivers have some level of capability here (far from infallible, of course); driverless cars won’t unless they’re made that way.

For more on the future of automated vehicles, see Self-Driving Cars: Joyride Or Wrong Turn?



About Danielle Beurteaux

Danielle Beurteaux is a New York–based writer who covers business, technology, and philanthropy. Her work has appeared in The New York Times and on Popular Mechanics, CNN, and Institutional Investor's Alpha, among other outlets.