Has artificial intelligence (AI) lived up to its billing so far? In some cases, there is too much hype, but paradoxically, the potential opportunities and benefits of AI are still, if anything, under-hyped. While there is a lot of noise regarding AI, there’s been a lack of in-depth discussion and analysis of how it’s actually going to transform businesses. A lack of talent should be what’s holding companies back, but the real barrier is a lack of understanding of what’s possible — particularly at the top of larger enterprises. There is an acceptance that AI will change everything in 10 years’ time, but little appreciation of how it could, and should, impact businesses right now.
What are the barriers to organizations adopting AI technology more quickly?
There are some obvious factors. One is talent; AI is still a small field. According to a recent EY pulse survey, 56% of respondents saw a lack of AI talent as their greatest barrier, a sizable jump from the 36% reported just four months earlier when EY posed the same question to a separate group of technology professionals. The industry is doubling in size each year, but it’s still very small given the expectations.
In addition, there are few good enterprise AI products. The space is still dominated by people coming from a technology background. We need to see more input and influence from business-driven individuals who care about creating something with value or who have a burning problem they need to solve.
Meanwhile, the platform ecosystem is very immature, and existing platforms are principally targeted at data scientists and experimentalists. There’s still little in the way of truly enterprise-grade tools.
Finally, the media tends to focus on the fears associated with AI rather than on the benefits. Consequently, leaders at large enterprises may spend more bandwidth addressing those fears than exploring the opportunities created. The AI community needs to take ownership of this issue and drive conversations that allow business leaders to address and move past some legitimate concerns.
What risks should businesses consider when looking to implement AI?
The biggest risk is non-adoption. Every challenge in business is an opportunity for AI. Adopting AI will require patience and a willingness to learn, and the process will be complex and lengthy, so firms need to start now. Many early projects will have a low return on investment (ROI) and a limited impact; they primarily provide learning opportunities. But that learning is essential, and it is the first step on a transformational journey that will touch every business.
Another big risk is talent. The AI community is still very small. This leads to a significant risk of the Dunning-Kruger effect — people believing they know much more than they do — and the risk of over-promising and under-delivering is high.
Bias in machine learning (ML) is potentially a problem — if there’s bias in your data, AI will amplify it unless you specifically put in checks to prevent this from happening. AI systems also make decisions faster than humans, so businesses must develop appropriate risk monitoring and management approaches.
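One concrete form such a check can take is comparing a model’s outcomes across groups. The sketch below is purely illustrative — the data, group names, and tolerance are assumptions, not anything from the article — and shows a simple demographic-parity style check: flag the model if its positive-prediction rates differ too much between groups.

```python
# Illustrative bias check: compare a model's positive-prediction rates
# across groups (a demographic-parity style test). All data here is
# hypothetical; real checks would run on actual model outputs.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def parity_gap(preds_by_group):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = approve, 0 = deny) for two groups.
preds = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% approved
}

gap = parity_gap(preds)
print(f"parity gap: {gap:.2f}")  # prints "parity gap: 0.50"
if gap > 0.2:  # illustrative tolerance, set by the business
    print("warning: possible bias; review training data and features")
```

A check like this does not prove a model is fair — it only surfaces one kind of disparity for human review, which is the point the article makes: the checks have to be put in deliberately.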
Finally, overregulation and regulators’ lack of understanding about these technologies could cause issues. It is essential that enterprises accelerate their learning and the development of internal controls so they can have informed, educated responses to regulators.
Should businesses take a top-down or bottom-up approach to AI implementation?
Both. Senior leadership should drive a top-down approach while enabling a bottom-up approach. Technologists’ natural inclination is to learn a new technology and then look for ways to apply it. That approach is a great way to develop corporate knowledge and experience, but you shouldn’t expect a big ROI initially because you’re focused on learning rather than on identifying key business problems and developing solutions for them.
There should also be a top-down re-examination of the business. Ask the following questions: what is the core value that your business delivers? How can you deliver more of that, better, faster or differently? What are the intelligence gaps that stand in the way of doing that now? Then challenge the technologists to fill those gaps.
AI is not perfect. These technologies will make mistakes, and although the mistakes will diminish over time, it’s key to recognize what “good enough” looks like. If you’re running a nuclear reactor, then the system must be near-perfect; if your business makes ice cream, then not quite so. How accurate does a solution need to be to achieve value for the business? Work this out, then think about the technology — not vice versa.
Views expressed are opinions of authors Chris Mazzei and Nigel Duffy. This is an abstract of an article originally published by EY in the “Innovation Matters” series, and is republished by permission.
Want to learn more?
- Video: The impact of technology on the future of work with Chris Mazzei
- Video: Artificial intelligence today … and tomorrow with Nigel Duffy
- Slideshare: Artificial intelligence: the move toward human-like machines
- Video: Global review 2017 with Nigel Duffy
- Article: “How Responsive Organizations Can Drive Inclusive Growth”