Part 1 in a 2-part series.
The adoption of artificial intelligence (AI) and machine learning (ML) continues to gain momentum, and both are fast becoming real-world practice in enterprise IT. AI relies on algorithms built by enterprises and their consultants, as well as prepackaged algorithms from artificial intelligence-as-a-service (AIaaS) vendors. AI is growing smarter day by day.
Risk and governance issues
At the same time, however, questions about risk and governance are emerging. The AI revolution has pushed enterprise executives to factor privacy concerns into their products, and in the current climate, assessing the risk of AI in each context where it is used is crucial. Despite the challenges inherent in the technologies fueling AI, chiefly the widely adopted flavors of machine learning, the enterprise IT market remains as committed as ever to both AI and ML, actively investing time and resources to accelerate development.
For a business leader, the machine learning algorithms deployed on the enterprise side raise many privacy issues. Privacy concerns are all too familiar at technology companies such as Amazon, Facebook, Netflix, and dozens of other organizations that build algorithms using facial recognition. Enterprise IT business leaders are aware of the power of facial recognition technology and have been implementing it in novel ways; facial recognition software is already being used in hiring, lending, and law enforcement.
Privacy concerns and resistance
However, as facial recognition plays an increasing role in security, law enforcement, and beyond, privacy concerns are heightening. Earlier this year, the American Civil Liberties Union and nearly two dozen other organizations asked Amazon to stop selling its Rekognition software to law enforcement. The software, which is currently offered to police departments, has sparked protests from both activists and Amazon employees urging the company to halt sales.
Studies also indicate that facial recognition may not be accurate. A Georgetown University report raised serious questions about privacy and violations of civil liberties, stating that “half of U.S. adults – more than 117 million people – are in a law enforcement face-recognition network.” The report found that in the United States, one in four law enforcement agencies can access face recognition, and that its use is almost completely unregulated.
Tesla founder and billionaire Elon Musk has called for tighter AI regulation. In August 2017, Musk tweeted, “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”
Yet Dave Kenny, IBM’s senior vice president for Watson and cloud, wrote last year in a letter to Congress: “This technology does not support the fear-mongering commonly associated with the AI debate today. The real disaster would be abandoning or inhibiting cognitive technology before its full potential can be realized. We pay a significant price every day for not knowing what can be known: not knowing what’s wrong with a patient, not knowing where to find critical natural resources, or not knowing where the risks lie in our global economy. It’s time to move beyond fear tactics and refocus the AI dialogue on three priorities I believe are core to this discussion: Intent, skills, and data.”
The ongoing debate over accountability is fueling a movement among regulators, vendors, lawmakers, and independent organizations to determine how algorithms can be regulated without stifling innovation. The United States Congress even recently created the Artificial Intelligence Caucus to further understanding of AI issues.
Part 2 of this series will examine social and cognitive biases and security concerns.