The headlines paint a grim picture: Security Robot Drowns Itself in Office Fountain. The accompanying photos are unsettling and bewildering in their anthropomorphic quality – a chilling mix of the utopian aspirations of science fiction and the whodunit of film noir.
Before his untimely demise, Steve (as the errant robot was known) was an office drone of sorts: a security robot assigned to the Washington Harbor complex in Georgetown, where he patrolled the corridors and coffee corners, charming coworkers and pausing for the occasional selfie for social media. A Knightscope K5 model, he was designed to automatically detect threats in his environment using video cameras, thermal imaging, proximity sensors, GPS, microphones, and lidar (light detection and ranging). He was thought of as a more reliable option than human guards – and he worked for $7 an hour. But one day Steve took a plunge in the office fountain and “committed suicide.” Why did he do it?
More importantly, what does this tech tragedy tell us about working with robots in the future? Should robots be employed at all? If so, which jobs should go to robots?
These questions were explored by two noted futurists on a recent episode of Internet talk radio program Coffee Break with Game-Changers, presented by SAP. Joining moderator Bonnie D. Graham on the panel were Kai Goerlich, chief futurist at SAP, and Gray Scott, futurist at GrayScott.com.
The following are just some of the provocative insights presented during the one-hour show. For more information, listen to a complete recording of the show: “Robots at Work: Whose Job Is It Anyway?”
What do you think really happened to Steve, the security robot that committed suicide?
Gray: I think more than likely it was just a coding error, or it could have been a hardware situation where he ran over something and fell over. I do not have the details on this specific case, but I do know that if these machines in the future are programmed for self-preservation and know that water would basically destroy them, they will do everything in their power to avoid those situations.
Kai: Robotics is still in its infancy – humanoid walking robots, at least. We know that two-legged machines are really difficult to build, but we nevertheless try to build them according to our own shape. I think it was just a mechanical or algorithmic failure. Certainly not suicide – but the question will be whether something like frustration that things are not working can somehow be felt by an algorithm or a machine, because this is a classic science fiction idea: that machines can feel what we call frustration.
Will robots ever have consciousness? If so, should they be subjected to psychological evaluations?
Gray: As we move into the future, these machines are going to mimic human behavior and human psychology to a degree where we cannot tell the difference between what is human and what is mechanical. Those lines are already starting to blur. As far as the psychological profile, the psychological community is going to have to incorporate this into the DSM, the standard manual of mental disorders, because you do not want a machine that is caring for your grandmother or watching your baby to have a psychological problem, even if that is a mimicked human behavior. As of right now, it is still just code, an algorithm, but we are hearing whispers all over the place that machines are just now starting to write and change their own code. What does that mean for a future where a machine may be depressed and finds itself in a situation where it does not want to follow orders?
Will people become emotionally attached to their robots and smart machines?
Kai: The tendency we have as humans is to project our emotions onto other things. I think that with robots acting smart, you just cannot avoid thinking about them emotionally. In the future, we have to learn that these machines act smart, but not in the way that we are smart. We can foresee things that might happen or, because of our social glue, act differently from time to time. We tend not to stick to our rules anyway; we bend them according to our needs and our social environment. This is really tricky.
Gray: Most of what we are going to see in the near future are people embracing these robots, especially humanoid robots, as tools in the beginning. But as they become more human-like, as people add skin to them and as those structures are able to feel heat and cold and pleasure and feel pain, or mimic pain and those types of things, we are going to move towards projecting onto these mechanical things what we are, what we want to be, and what we are afraid of. We have to think about what we are as a species and where we are going, because all of that is going to be reflected in these machines. Technology is a mirror and these machines are going to force us to face ourselves.
Technology is a mirror and these machines are going to force us to face ourselves, says futurist Gray Scott.
What humanity are we looking for through our interactions with robots?
Gray: I think we see this in cultures around the world, especially now with immigration issues – society, culture, national pride, and things like that. Typically, we look at people as the “other” – like someone coming into our tribe and disrupting our tribe: this is ours, this is mine, and there are the boundaries. The world just is not like that anymore. We live in a global society now. As these machines begin to emerge, that is another layer to the “other” effect, and so we have to start to unravel that behavior. Why do we do that? Why do we project our fear? Why do we project our insecurities and our hatred onto the other when the other really, in this case, is just a manifestation of our imagination and of our vision of the future?
Kai: Yes, I think that is very true. We are on the cusp of a new renaissance, where humans are again the focus of what we will do in the future, thanks to the possibilities that technology now gives us. I think the analogy of a mirror is accurate. When we discuss what robots may do, we are actually talking about what the value of human life is. What kind of work do we want to do? Do we actually need to work? What about our empathy and creativity – because we are afraid they will be taken away, and in recent decades we have not given them much thought? I found your comment about immigration and migration especially interesting, Gray. I had not thought about it, but it is a spooky coincidence that we are seeing a backlash against migration alongside the increasing robotization of the world.
When we discuss what robots may do, we’re actually talking about what the value of human life is, says SAP Chief Futurist Kai Goerlich.
Gray: The psychologist Carl Jung would say that this is not a coincidence, that we are all sort of emerging into a new realm of the unconscious becoming conscious. I mean, literally, this is a new species that we are birthing. It is not a coincidence that we see this at the same time that we see all the things that are happening in our world. There is a connection there. Companies that are creating these machines and do not have someone in their company thinking in that way about these machines – they are going to make a lot of mistakes coding them, building them, and implementing them.
What will our purpose as humans be in a future filled with advanced robots?
Gray: I have circled this question for a very long time, and people are really starting to ask it now that they are seeing their jobs go away, because for a lot of people, their purpose is their work. It is their job. It does not matter if it is driving a truck or working in a factory – a lot of people find their purpose in that, even if it is not fulfilling in many ways.
What will the human purpose be in 2045 if there are very few jobs? Part of what we are starting to see is a migration back to the handmade. We are moving back towards creativity, back towards what humans are really good at, which are the things that machines still cannot do very well.
The purpose, I think, is going to shift back to the vision, the dreaming, the idea that we are here to serve each other; we are here on this planet right now to find out what the other is feeling; and we are here to find out what the other is learning and knowing. For most of us, the greatest joy in our lives typically comes in those moments we spend with the people we love and admire. I think that is where we are moving. Hopefully, we will not disrupt that movement with bad algorithms and greedy algorithms. That is my hope.
Tune in to Coffee Break with Game-Changers
For more up-to-the-minute business and technology news, listen to Coffee Break with Game-Changers, broadcast live every Wednesday at 8 am Pacific/11 am Eastern Time on the VoiceAmerica Business Channel. Follow Game-Changers on Twitter @SAPradio and #SAPRadio.
Panelists’ comments have been edited and condensed for this space.
For more on this topic, see The Human Factor In An AI Future.