Why Robots And AI Need To Be More Like Mr. Spock

Kai Goerlich and Christopher Koch

No one on Star Trek’s Starship Enterprise ever doubted that Mr. Spock was the smartest being on the bridge. Yet rather than simply being admired for his feats of logical thinking, Spock was often chided – and mistrusted – for accentuating his Vulcan reasoning skills while suppressing his more emotional human half.

As artificial intelligence (AI) enters the mainstream, we are confronted with a real-world Spockian dilemma. AI and a growing body of behavioral-science research demonstrate that while nature has given us many capabilities, rational thinking is not the strongest among them. Sooner rather than later, AI will surpass us at it, obliterating the competitive advantage that made us masters of our planet.

It’s no surprise, then, that we already mistrust AI every bit as much as some Star Trek characters mistrusted Spock. When AI-powered self-driving cars have accidents, for example, we assume that there is a bug in the system somewhere that needs to be fixed. That’s because we tend to think that we are all very good drivers – which we are, as long as we’re focused on the road and not texting or posting selfies to the Web, among the other irrational acts behind the wheel that made 2016 the deadliest year for traffic accidents in the United States in nearly a decade. More than 90% of those accidents were caused by human error.

The real issue with self-driving cars is not that they are unsafe, but that humans are. Experts predict that if we let the more rational self-driving machines take over, they could save 10 million lives around the world per decade.

The problem is that we expect machines to adapt to our fuzzy behavior rather than recognizing that machines act only as logically as we program them to. We assume that machines should behave like humans, so we anthropomorphize them.

Like rocks, like humans

We easily attribute emotion and human-like intent to non-human entities around us, like rocks and animals. This tendency has long been reflected in art and entertainment, from the classic Grimm fairy tales to Teenage Mutant Ninja Turtles.

Now we’re doing the same thing with machines. Consider your own reaction when you watch a robot being mistreated. We cannot help but empathize.

We’re going to be spending a lot more time with robots and AI in the coming years. Making them anthropomorphic may seem silly to the rational parts of our brains, but our more primitive (and more dominant) emotional brain center demands that they be more like us. According to a report in the Journal of Experimental Social Psychology, people trust a vehicle’s performance more when it has anthropomorphic features such as a name, a gender, and a human voice.

Indeed, anthropomorphism influences the way we design AI and the way we perceive technology, according to Yvonne Förster, a professor of cultural philosophy at Leuphana University. Developing intelligent technologies always requires a model or a concept that serves as a blueprint for the design, and our instinct is to create those technologies in our own image, such as the android Ava in the movie Ex Machina.

Bias in – bias out

Ironically, while we need machines to seem more human before we trust them, we tend not to question their calculations. We grew up with calculators and spreadsheets that always spit out the right answers to our simple queries, and we expect complex AI algorithms to be just as infallible.

Yet algorithms are only as good as the data we train them on. For example, when experts examined stop data from the New York City police department, they discovered that officers tended to stop people of color more often than whites, even though actual criminality was similar across the different groups. If the data were left uncorrected, an algorithm would logically, though incorrectly, conclude that crime is correlated with skin color. Only after correcting for that bias could the data be used to develop an algorithm that predicts criminal hotspots without discriminating by ethnicity.
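To make the point concrete, here is a minimal, purely illustrative sketch in Python. It uses synthetic data and hypothetical stop rates (not the actual New York figures): when one group is stopped far more often, the raw count of offenses "found" makes that group look more criminal, even though the underlying offense rate is identical by construction.

```python
# Illustrative sketch only (synthetic data, hypothetical numbers):
# "bias in, bias out" - a biased sampling process produces a
# biased-looking statistic even when true offense rates are equal.
import random

random.seed(42)

TRUE_OFFENSE_RATE = 0.05                      # identical for both groups by construction
STOP_RATE = {"group_a": 0.30, "group_b": 0.05}  # hypothetical: group A is stopped far more often

def simulate(group, n=100_000):
    """Return (stops, offenses_found) for a synthetic population of size n."""
    stops = offenses_found = 0
    for _ in range(n):
        offender = random.random() < TRUE_OFFENSE_RATE
        stopped = random.random() < STOP_RATE[group]
        if stopped:
            stops += 1
            offenses_found += offender
    return stops, offenses_found

for group in ("group_a", "group_b"):
    stops, found = simulate(group)
    # Raw counts make group A look several times more "criminal" simply
    # because it is stopped more often; the rate per stop is the same.
    print(f"{group}: stops = {stops:6d}, offenses found = {found:5d}, "
          f"rate per stop = {found / stops:.3f}")
```

An algorithm trained naively on the raw counts would "learn" that group A is more crime-prone; normalizing for how often each group is stopped removes the apparent difference. That, in miniature, is the correction the New York data required.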

Accurate may not be right

However, even if the data is correct, we cannot assume that an algorithm will always come up with ethical answers. That’s because in some cases it’s not yet possible for us mere humans to deduce how complex algorithms arrive at their answers. Researcher Carina C. Zona has shown that even when algorithms are technically correct, the outcome can be harmful to humans. In one of her examples, an automated e-mail advertising birth announcements to new parents was sent to a woman whose baby had died. Being correct 99.999% of the time doesn’t help if you’re extremely wrong the other 0.001% of the time.

Spock was Star Trek’s most popular character because of those rare moments when his inner humanity pierced his Vulcan cool. He could be dismissive of his human colleagues for their irrational emotionality, but when the situation called for it, he could relate to them on their own level.

As we begin a new era of human-machine collaboration, we need to create this same balance of unerring logic, humanity, and trust. Our future depends on it.

This blog is the second in a six-part series on machine learning.

For more on using AI to avoid bias, see How to Avoid the Most Dangerous Barrier to Good Decision Making.

(Image: geraldford on Flickr under Creative Commons 2.0)


About Kai Goerlich

Kai Goerlich is the Chief Futurist at SAP Innovation Center network. His specialties include competitive intelligence, market intelligence, corporate foresight, trends, futuring, and ideation. Share your thoughts with Kai on Twitter @KaiGoe.

About Christopher Koch

Christopher Koch is the Editorial Director of the SAP Center for Business Insight. He is an experienced publishing professional, researcher, editor, and writer in business, technology, and B2B marketing.