AI In The Real World: What Could Possibly Go Wrong? (Part 2)

Jennifer Horowitz

Part 2 in a 2-part series. Read Part 1.

Part 1 in this series examined different points of view on the risks and opportunities of artificial intelligence (AI). Part 2 looks more closely at bias, security, and governance concerns.

Social and cognitive biases introduced, accidentally or intentionally, by the human engineers who code the algorithms can be detrimental to the ethics of artificial intelligence (AI). For example, job-screening systems that use AI have suppressed female candidates for certain jobs simply because they were trained on historical hiring data. Clearly, the algorithms that encode how human decisions should be made are still not immune to gender bias, whether that bias originates in the workplace or in the developers’ own values.
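To make that mechanism concrete, here is a minimal sketch that trains a simple classifier on historical hiring labels that already favor men, then scores two candidates who differ only in gender. It uses synthetic data and scikit-learn purely for illustration; it is not based on any real screening system.

```python
# Minimal sketch: a screening model trained on biased historical hiring
# data reproduces that bias. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
experience = rng.normal(5, 2, n)   # years of experience
gender = rng.integers(0, 2, n)     # 0 = female, 1 = male (hypothetical encoding)

# Historical labels: past recruiters favored male candidates, so "hired"
# correlates with gender independently of experience.
hired = (0.5 * experience + 2.0 * gender + rng.normal(0, 1, n)) > 4.0

model = LogisticRegression().fit(np.column_stack([experience, gender]), hired)

# Two candidates identical in every respect except gender get different scores.
print("female:", model.predict_proba([[6.0, 0]])[0, 1])
print("male:  ", model.predict_proba([[6.0, 1]])[0, 1])
```

Because the historical labels already encode the bias, the model gives the two otherwise identical candidates different scores. Note that simply dropping the gender column is often not enough, since other features can act as proxies for it.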

Potential consequences of AI as a service

The way AI will automate enterprise IT jobs could be affected by issues such as a male-dominated workforce designing the algorithms, a lack of female data scientists, or a lack of intersectional thinking when developing algorithms. AI as a Service (AIaaS), in which a third party offers AI capabilities, can also have economic and social consequences, and there would be further implications if a true AIaaS offering were available to just anyone. Within an AI economy, social tensions and questions about human work productivity can become intense.

The next generation of AIaaS companies is poised to raise many different challenges for economic policy. AI has certainly made life easier for humans, but when an algorithm goes wrong, the results can include fatalities, lost revenue, racist outcomes, and more.

Yet there is still little that can be done, overall, to curb every risk and privacy issue. Today, more and more robots and machines can solve problems involving highly complex data and can learn, perform, and perfect specific tasks. As AI’s growth and innovation spread globally, and as its initial successes continue to generate interest, business leaders press on with the hard work of aggressively rolling out AI deployments organization-wide.

Security concerns and employee misgivings

AI used in applications also carries potential unanticipated security risks, and in some cases, the engineers themselves do not endorse the work. Earlier this year, Bloomberg reported that a group of influential software engineers from Google’s cloud division refused to create a security feature called an “air gap” for the military. An air gap is a network security measure that physically isolates a secure computer network from unsecured networks. According to Bloomberg, “the technology would assist Google in winning over sensitive military contracts.”

Employee rebellion has grown tremendously at Silicon Valley technology companies. Google employees expressed their concerns about the “black box” method in a company letter. “This plan will irreparably damage Google’s brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust,” the employees stated. “This contract puts Google’s reputation at risk and stands in direct opposition to our core values. Building this technology to assist the U.S. Government in military surveillance – and potentially lethal outcomes – is not acceptable.” Last week, Bloomberg reported that Google has dropped out of competition for the contract.

Bear in mind that when working with advanced AI algorithms and applications, explanations that make sense to all the different stakeholders in the IT enterprise are critical. Even for developers, automated decision-making can be altogether inscrutable. Within the DevOps environment, many enterprise organizations have moved away from the back-room IT maintenance model into a development cycle of customer-facing apps. Automated tasks can replicate high security risks, which robots and machines then spread at scale.

Differing points of view: threat or opportunity?

Enterprise organizations are both underprepared for and unaware of these potential vulnerabilities arising from DevOps. The CyberArk Threat Landscape report suggests that organizations risk having their new apps blocked unless they address security at the code level from the get-go. When purchasing any apps that incorporate AI elements, security should be a top priority, and with any layer of cloud, safety and security should be built in. Artificial intelligence, machine learning, and deep learning are fundamentally different ways to program computers. For the public, it is only human nature to distrust what one cannot understand, and many of the AI and ML models underlying today’s applications are still far from transparent. Maintaining that trust is key.
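For a concrete sense of how an opaque model can at least be probed, the sketch below uses permutation importance, a generic model-inspection technique chosen here for illustration (the CyberArk report and the companies cited do not name it). It shuffles each input feature in turn and measures how much the model’s accuracy drops:

```python
# Minimal sketch: probing a black-box model with permutation importance.
# Synthetic data and a generic ensemble model; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))                  # three anonymous input features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # only feature 0 matters much

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy: features
# whose shuffling hurts most are the ones the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Techniques like this do not make a black box transparent, but they give stakeholders a first-order view of which inputs a model actually relies on.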

If business leaders and technology experts have their way, artificial intelligence will transform our world. Yet these same individuals cannot yet decipher where that transformation will lead. Choosing sides has proven to be a complicated process. AI governance creates many challenges, with regulators fearing a world controlled by robots.

Theoretical physicist Stephen Hawking warned that there may even be a “robot apocalypse.” He feared the consequences of creating something that could match or surpass humans, believing there is no real difference between what a biological brain can achieve and what a computer can achieve. The fear of AI superiority is well known. “It would take off on its own and redesign itself at an ever-increasing rate,” he told the BBC in 2014. “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” At an AI event hosted by the Centre for the Future of Intelligence, Hawking said, “In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which.”

Realizing value with awareness of risks

Enterprises worldwide are just beginning to realize the real value of artificial intelligence. According to Microsoft Azure CTO Mark Russinovich, artificial intelligence remains one of the most promising higher-level machine-learning services. Speaking at GeekWire’s Cloud Tech Summit, Russinovich said, “Companies are taking advantage of AI and ML to automate processes and get insights into operations that they didn’t have before.”

As applications evolve in an AI environment, growing adoption of machine learning and AI techniques will deliver greater opportunities. AI algorithms in enterprise IT have the power to create as well as the power to destroy. The potential uses are limitless, but so are the unintended consequences, and mistakes in this space can land any enterprise IT business in the headlines.

For more insight, check out a recent study from the Economist Intelligence Unit that uncovered how some companies are “Making the Most of Machine Learning: 5 Lessons from Fast Learners.”



About Jennifer Horowitz

Jennifer Horowitz is a management consultant and journalist with over 15 years of experience in the technology, financial, hospitality, real estate, healthcare, manufacturing, not-for-profit, and retail sectors. She specializes in analytics, offering management consulting to global clients ranging from midsize to large-scale organizations. Within analytics, she helps organizations define their metrics strategies, create concepts, define problems, conduct analysis, solve problems, and execute.