AI And Emotions: How Far Can We Take This Connection?

Akhilesh Mahto

Spy movies, with their paraphernalia of cool gadgets and technologies, have always enticed audiences. In these movies, we have seen the polygraph used to detect whether somebody is being truthful. Needless to say, the polygraph is a multi-billion-dollar industry and plays a crucial role in crime adjudication. Polygraphs do not have any “intelligence” built into them. They are simple machines that do what they were designed to do: measure vital signs such as blood pressure and pulse to reach a conclusion.

As artificial intelligence (AI) becomes mainstream, it is increasingly being used in innovative new ways. The European Union recently announced a trial of an AI-based deception-detection system to identify people lying at border control. Those in charge recognize how high the stakes are and how close to certainty they need to be, yet they also acknowledge that no existing technology can be 100% accurate. They plan to follow an ethical design approach with complete transparency of test results. How good and comprehensive must their training data sets be for the machine to identify lies and judge people without bias?

Automation and robotics aren’t new to the 21st century

René Descartes, the 17th-century French philosopher, was familiar with the automata of his day. He questioned whether humans are also just machines responding to their environment, but he concluded that they cannot be, as human behavior is far too complicated and varied to be explained in such simple terms. With artificial intelligence, we are trying to bring that same complexity into machines. Machines are shifting from being metal boxes of computer chips and wires programmed to follow a fixed set of instructions into much more powerful entities that can weigh several factors before deciding on the next processing step.

One fundamental difference between humans and machines has been that machines lack emotions. That line is now blurring with the incorporation of emotion into AI applications. In May 2018, Sundar Pichai’s demonstration of Google Duplex went viral. The machine’s voice is strikingly realistic and even includes fillers like “mmm” that we humans use in daily conversation. There was an uproar that using technology in this fashion is deceitful. Since then, Google has confirmed that the AI assistant will identify itself as a machine when making calls on behalf of people.

Could we do sentiment analysis of a machine?

Sentiment analysis is defined as the process of computationally identifying and categorizing opinions expressed in a piece of text, especially to determine whether the writer’s attitude toward a particular topic, product, etc. is positive, negative, or neutral. There are many applications and programs that define a specific course of action based on users’ sentiments. Can we humans do reverse sentiment analysis? Will we be able to understand machines’ emotions and adapt our behavior to gel with the machine?
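To make the definition concrete, here is a minimal sketch in Python of the idea behind lexicon-based sentiment scoring. The tiny word lists and the scoring rule are purely illustrative assumptions for this article; real systems rely on far larger lexicons or trained models.

# Minimal, illustrative lexicon-based sentiment scorer.
# The word lists and scoring rule below are made up for demonstration only.

POSITIVE = {"good", "great", "love", "excellent", "happy", "helpful"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "sad", "useless"}

def sentiment(text: str) -> str:
    """Classify a piece of text as positive, negative, or neutral."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this helpful assistant"))          # positive
print(sentiment("The response was terrible and useless"))  # negative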

Emotionally connecting with machines

Julia is “Subject Three” in the Netflix original film Tau. Over time, Julia is able to make an emotional connection with Tau, to the point that Tau apologizes to her for not allowing her release from captivity. Through regular conversation and interaction, Julia is able to manipulate Tau and extract more information from it.

Of course, Tau is a work of fiction, but could we have an emotional connection with machines in the future, and will machines make emotionally biased decisions? Will machines understand and respect our judgments and biases? Will we be able to make an emotional connection with machines and drive an unexpected or unprogrammed outcome? In the book Homo Deus, Yuval Noah Harari says human beings are organic algorithms. He is of the opinion that nonorganic algorithms (a.k.a. machines with intelligence) can definitely surpass organic algorithms.

In my opinion, machines and their increasing intelligence do have a place in our lives. There are plenty of scenarios, from sports and entertainment to driverless cars and enhanced digital assistants, where emotionally intelligent machines can be our companions. But the bigger question is this: Will we come to terms with machines’ growing emotional intelligence?

Only time will tell if “deception detection” will keep its sanity and play by the rules or create a rift by making emotionally biased judgments.

For more on this topic, see “AI Predictions For 2019.”


About Akhilesh Mahto

Akhilesh Mahto is a technology enthusiast and is optimistic about emerging technologies. As a Solution Advisor for SAP, he loves to put technology to use to help solve business challenges. He is a firm believer that technology is a key enabler of better service and consumer experiences.