Apparently, I’m worthless. I was in the street at the same time as a baby, an old man, and a stray dog, and an autonomous car avoided all of them and swerved into me!
That’s a futurist joke from the New York Times. This situation, popularly known as the “trolley problem,” was tackled by researchers at MIT, whose recently published findings raise an important question for our autonomous future: How should an autonomous car make life-or-death decisions? Say a speeding autonomous car encounters a set of people and pets on the road ahead and knows it cannot stop in time. What should it do? Should the car prioritize its passengers over pedestrians, more lives over fewer, kids over adults, women over men, young over old, fit over sickly, higher social status over lower, pedestrians over jaywalkers, or humans over pets? And finally, should the car swerve to hit an adjacent vehicle or stay on course?
Hollywood tried to tackle this in blockbuster fashion in the 2004 film I, Robot. In the movie, a robot faced a conundrum: whether to save an adult or a child from drowning. It calculated each person’s chances of survival and chose to save the adult, even though the adult might have been able to save himself.
If you go back to the MIT question and try to answer it yourself, you will realize there are no easy answers. Now ask the person next to you and, if possible, someone from a different cultural upbringing. Whom you would try to save varies across the cultural spectrum:
- People from collectivist cultures like China and Japan put greater emphasis on respect for the elderly. That means they are more likely to spare the old over the young.
- People from poorer countries with weaker institutions are more tolerant of jaywalkers relative to pedestrians who cross legally.
- People from countries with high levels of economic inequality show greater gaps in the treatment of individuals with high and low social status.
- Contrary to popular sentiments, the sheer number of people in harm’s way isn’t always the dominant factor in choosing who should be spared.
The good news is that autonomous cars will soon be able to make such decisions. What is less clear is what parameters should govern the decision, and who gets to judge whether the decision is correct. What are the decision makers’ backgrounds, morals, ethics, and cultural values?
In all the hoopla about the potential and dangers of autonomous driving, we are missing the big picture and not asking the relevant questions. In fact, almost all artificial intelligence scenarios in the personal domain (leaving out industrial robots) are leading to situations that society has worked hard to eliminate or significantly reduce in real life:
- Facial recognition seems to be the second favorite (after autonomous cars). After initial successes, when broadly applied it tends to flag people from certain backgrounds as suspicious or even criminal. It just doesn’t seem to look beyond color or social strata.
- Amazon created an AI tool that screened resumes based on hiring patterns from the previous 10 years. In a sad reflection of the (poor) gender diversity in tech, the tool developed a recruitment bias against women and was eventually scrapped.
- It took people only 24 hours to teach racist utterances to Microsoft’s chatbot, Tay. This is a classic case of technology absorbing the prejudices of society, and even big corporations do not find it easy to manage.
These can be simple cases of unintended results from flawed datasets. It is easy to blame the input data (as it mirrors reality). However, both the data and the algorithm processing it need to be examined. The data fed to the algorithm needs to mirror the realities of the world – from gender and culture to physical and other attributes. Much like diversity in the real world, the data used to teach the AI needs to be diverse – diverse not just in its sources, but also in its types. Is the dataset representative across gender, age group, race, economic background, political views, and so on? Does it represent the world at large?
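One way to make the representativeness question concrete is to compare how often each group appears in a dataset against its share of a reference population. Below is a minimal sketch of such a check; the function name, the toy resume data, and the 50/50 reference shares are all hypothetical illustrations, not a real auditing tool.

```python
from collections import Counter

def representation_gaps(samples, attribute, reference_shares, tolerance=0.05):
    """Compare a group's share in a dataset against reference population
    shares and flag groups that are under- or over-represented beyond
    `tolerance`. `samples` is a list of dicts; `reference_shares` maps
    each group to its expected share."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical resume dataset skewed toward one gender
resumes = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
print(representation_gaps(resumes, "gender", {"male": 0.5, "female": 0.5}))
# -> {'male': 0.3, 'female': -0.3}
```

A real audit would repeat this across many attributes (and their intersections), but even this simple check would have flagged a resume pile that is 80% men before any model was trained on it.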
But a diverse dataset will take us only so far. The second part of the equation is the algorithm – the brains of the whole operation. How the algorithm is designed and implemented matters. Algorithms can intentionally or unintentionally carry in the prejudices and biases of the real world. They can be programmed to ignore a certain set of data – whether or not they should; or, as programmers like to ask: Is it a bug or a feature? Are the algorithms designed for the edge cases, or only for the well-to-do members of a society?
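Bias in an algorithm's output can be measured even when we cannot inspect its design. One common check is the disparate impact ratio: the selection rate of one group divided by that of another, where values below 0.8 fail the widely used "four-fifths rule" from US employment guidelines. The sketch below uses made-up screening outcomes purely for illustration.

```python
def selection_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected_decisions, reference_decisions):
    """Selection rate of the protected group divided by that of the
    reference group; the 'four-fifths rule' flags ratios below 0.8."""
    return selection_rate(protected_decisions) / selection_rate(reference_decisions)

# Hypothetical resume-screening outcomes (1 = advanced to interview)
women = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% selected
men   = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]   # 50% selected
ratio = disparate_impact_ratio(women, men)
print(round(ratio, 2))  # -> 0.4, well below 0.8: a red flag
```

Note that this kind of audit catches indirect bias too: an algorithm that never sees gender directly can still produce a skewed ratio if it keys on features that correlate with gender.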
It would be ironic if social media newsfeed algorithms determined that rumors about AI are quite popular and started showing those articles more prominently, leading to another vicious cycle of views, popularity, and panic.
Wait, that might be happening already!
As AI consumers, the final questions to ask are: Whose data and algorithms are we using? What are their morals, ethics, and cultural norms? Do they match ours? Would they make decisions similar to ours? And are we in danger of losing our humanity by relying so heavily on artificial intelligence?
We will soon be living in the age of AI. Increasingly, it is making the decisions that affect our lives – college admissions, where we work, how our work gets evaluated, whether we get a home loan, how much we pay for insurance, and how law enforcement treats us. In theory, this should lead to greater efficiency, convenience, and fairness – everyone gets judged according to the same rules, and bias gets eliminated. Unfortunately, this is still not a reality. All stakeholders need to work towards making AI ready for the global world, and understood, accepted, and adopted by it.
The entire premise of AI is to make our world and our lives better. We cannot allow it to carry (or even harden) the prejudices and biases of today into our future.
A version of this blog appeared earlier on LinkedIn Pulse.
The era of data ethics is upon us – and only the companies with the highest standards will win over global customers: “Can You Keep a (Data) Secret?”