Big Data analytics, machine learning, and other emerging artificial intelligence (AI) technologies have, in a very short time, become astonishingly good at helping companies see, and react to, patterns in data they would otherwise have missed. More and more, however, these new patterns carry difficult ethical choices. Not every connection between data points needs to be made, nor does every new insight need to be used. Consider these embarrassing real-world examples:
- One company sent “congratulations on your new baby” announcements to women who weren’t ready to reveal their pregnancy.
- Another company disproportionately targeted ads implying that the recipient had an arrest record toward people with names suggesting they belonged to minority ethnic groups.
- A ride-hailing company showed guests at a corporate party records of customers who had traveled late at night to addresses other than their own, then back to their own homes early the next morning, with a nudge-and-wink suggestion of what they might have been doing in between.
If those issues aren’t complex enough, there’s the so-called “trolley problem” facing engineers working on self-driving cars: what do they instruct the car to do in an accident situation when every possible outcome is bad? (For a sense of how difficult this task might be, visit The Moral Machine, an MIT website that lets you choose among multiple scenarios a self-driving car might encounter.) How they will make these decisions is, to put it mildly, a difficult question. How society should react when machines start to make life-changing or even life-ending choices is exponentially more so.
Guilty until proven innocent?
We can’t expect AI to know right from wrong just because it’s based on mathematical equations. We can’t even assume it will prevent us from doing the wrong thing. It turns out it’s already far too easy to use AI for the wrong reasons.
It’s well known, for example, that students often struggle during their first year at college. The University of Texas at Austin implemented an algorithm that helps it identify floundering freshmen and offer them extra resources, like study guides and study partners. In her book Weapons of Math Destruction, data scientist Cathy O’Neil cites this project approvingly because it increases students’ chances of passing their classes, moving ahead in their field of study, and eventually graduating.
But what if a school used a similar algorithm for a different purpose? As it turns out, one did. In early 2016, a private university in the U.S. used a mathematical model to identify freshmen who were at risk of poor grades — then encouraged those students to drop out early in the year in order to improve the school’s retention numbers and therefore its academic ranking. The plan leaked, outrage ensued, and the university has yet to recover.
This may be uncomfortably reminiscent of the 2002 movie Minority Report, which posited a future world where people are arrested preemptively based on predictions that they will commit crimes. We aren’t at that dystopian point, but futurists, who make a career out of speculating about what’s coming next, say we’re already deep in uncharted waters and need to advance our thinking about the ethics of AI immediately.
Current thinking, future planning
There’s no way around it: all machine learning is going to have built-in assumptions and biases. That doesn’t mean AI is deliberately skewed or prejudiced; it just means that algorithms and the data that drive them are created by humans. We can’t help having our own assumptions and biases, even if they’re unconscious, but business leaders need to be aware of this simple truth and be proactive in addressing it.
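To see how human bias can slip into an algorithm with no deliberate intent, consider a minimal sketch (using entirely synthetic, hypothetical data): a trivially simple model that “learns” from historically skewed hiring decisions will faithfully reproduce that skew.

```python
# Minimal illustration with synthetic data: a model trained on historically
# skewed decisions reproduces the skew, even though nothing in the code
# is deliberately prejudiced.

from collections import Counter

# Hypothetical historical hiring records: (group label, was hired?)
# Group A was hired 8 times out of 10; group B only 2 times out of 10.
history = [("A", True)] * 8 + [("A", False)] * 2 + \
          [("B", True)] * 2 + [("B", False)] * 8

def fit_per_group_rule(records):
    """'Learn' the majority outcome per group -- the simplest possible model."""
    by_group = {}
    for group, hired in records:
        by_group.setdefault(group, []).append(hired)
    # For each group, predict whatever outcome was most common in the past.
    return {group: Counter(labels).most_common(1)[0][0]
            for group, labels in by_group.items()}

model = fit_per_group_rule(history)
print(model)  # {'A': True, 'B': False} -- yesterday's bias, now automated
```

The model never sees an instruction to discriminate; it simply inherits the pattern in the data it was given, which is exactly why proactive awareness matters.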
AI has enormous potential, but if people don’t feel they can trust it, adoption will suffer. If we simply avoid the risks, we also lose out on the benefits. That’s why businesses, universities, governments, and others are launching research initiatives and engaging in dialogue around AI-related concerns, principles, restrictions, responsibilities, unintended outcomes, legal issues, and transparency requirements.
We’re also starting to see the first explorations of ethical best practices for maximizing the good and minimizing the bad in our AI-infused future. For example, a fledgling movement is emerging to monitor algorithms to make sure they aren’t learning bias, and what’s more, to audit them not just for neutrality, but for their ability to advance positive goals. In addition, there’s now an annual academic conference, Fairness, Accountability, and Transparency in Machine Learning (FATML), launched in 2014 and focusing on the challenges of ensuring that AI-driven decision-making is non-discriminatory, understandable, and includes due process.
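What might such an algorithmic audit look like in practice? One common starting point (a sketch only, using hypothetical decision data) is a demographic-parity check: does the algorithm produce favorable outcomes for different groups at similar rates?

```python
# Illustrative sketch of one simple fairness audit -- the "demographic
# parity" check. The data below is hypothetical; a real audit would use
# the algorithm's actual decision logs.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs.
    Returns the approval rate for each group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Difference between the highest and lowest group approval rates."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: (group label, was the applicant approved?)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

print(approval_rates(sample))  # {'A': 0.75, 'B': 0.25}
print(parity_gap(sample))      # 0.5 -- a large gap that warrants investigation
```

A large gap doesn’t prove discrimination on its own, but it flags exactly the kind of pattern these monitoring efforts aim to surface before it does harm.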
But making machine learning more fair, accountable, and transparent can’t wait. As the AI field continues to grow and mature, we need to act on these steps right away:
- First, we must think about what incentives AI algorithms promote, and build in processes to assess and improve them so they guide us in the right (that is, the ethical) direction.
- We must also create human-driven overrides, avenues of recourse, and formal grievance procedures for people affected by AI decisions.
- We must extend anti-bias laws to cover algorithms. Civilized countries put controls on weapons; when data can be used as a weapon, we need governmental controls to protect against its misuse.
- Most importantly, we must see the question of AI and ethics less as a technological issue than as a societal one. That means introducing ethics training as part of both formal education and employment training, for everyone from the technologists creating AI systems to the vendors who market them to the organizations deploying them. It means developing avenues through which developers and data scientists can express dissent when they see ethical issues emerging on AI projects. And it means creating and using methodologies that incorporate values into systems design.
Fundamentally, AI is merely a tool. We can use it to set ethical standards, or we can use it as an excuse to circumvent them. It’s up to us to make the right choice.
Read the executive brief Teaching Machines Right from Wrong.