Business intelligence (BI) tools first appeared on the enterprise technology scene several decades ago, clumsy and difficult to use at first but ultimately improving the flow of data through organizations from their operational systems to decision support. Data warehousing cut the time it took to access data, but even at full maturity, BI systems could do little more than produce data and reports in a traditional, organized way. The rules-driven software wasn’t actually providing intelligence at all.
But with the advancement of artificial intelligence and—more importantly—machine learning, true business intelligence is actually on its way to the enterprise. Such self-learning software will run on servers, be built into bots, drive decision-making systems, be embedded into cars or aircraft, and become the beating heart of mobile devices.
Increased data-processing power, the availability of big data, the Internet of Things, and improvements in algorithms are converging to power this actual business intelligence. To be clear, this will be an evolution rather than a revolution. There are a number of factors that could limit the progress of machine learning and its integration into business, from quality of data and human programming to cultural resistance. However, the question is when, not if, the BI tools of today become a quaint relic of earlier times and real business intelligence emerges.
Beyond sci-fi AI
Artificial intelligence (AI), a term dating back to the 1950s, is tossed about quite a bit these days. It’s an umbrella descriptor that refers to computers capable of doing things that a human typically would. It’s often inaccurately used interchangeably with machine learning. Machine learning, however, is a specific subset of AI that uses statistical methods to improve the performance of a system over time. Any programmer can write code to develop a program that more or less acts like a human. But it’s not machine learning unless the system is learning how to behave based on data. Machine learning comes in several flavors: supervised learning (the algorithm is trained using examples where the input data and the correct output are known), unsupervised learning (the algorithm must discover patterns in the data on its own), and reinforcement learning (the algorithm is rewarded or penalized for the actions it takes, learning through trial and error). In each case, the machine is able to learn from data, structured today and increasingly unstructured in the future, without being explicitly programmed to do so, absorbing new behaviors and functions over time.
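To make the distinction concrete, here is a minimal sketch, not drawn from the article, contrasting rules-driven software with supervised machine learning. It assumes Python with the scikit-learn library and uses its bundled iris flower dataset purely for illustration.

# Illustrative sketch only (assumed example, not from the article): a hand-coded
# rule versus a supervised model that learns its rules from labeled data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Rules-driven software: a programmer writes the decision logic explicitly.
def rule_based_classifier(petal_length_cm):
    # A fixed threshold chosen by a human; it never improves with more data.
    return "setosa" if petal_length_cm < 2.5 else "other"

# Supervised learning: the model infers its own decision rules from labeled
# examples (input data paired with the correct output).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)            # learn from (input, correct output) pairs
print(model.score(X_test, y_test))     # held-out accuracy improves as data grows

Unsupervised learning would drop the labels and let the algorithm look for structure on its own, for example by clustering, while reinforcement learning would replace the labeled examples with rewards and penalties earned through trial and error.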
Gartner recently placed machine learning at the peak of “inflated expectations” in its report, noting that this emerging capability is two to five years from mainstream adoption. But those immersed in machine learning development are grounded in reality, and the reality is that they are making significant strides. Machine learning mimics human learning; it takes time.
The big advantage machines have over us is that they can handle massive amounts of data, take advantage of ever-faster processing power, and run (and thereby improve) 24 hours a day. Over just the last four years, the error rate in machine learning-driven image recognition, for example, has fallen dramatically to near zero, practically to human performance levels.
Still, every instance of machine learning is different. Just as, for us, learning to play the piano is different from learning to crawl, some tasks come to machines more easily than others. It may take longer for a computer to learn to analyze text than to recognize the meaning of a furrowed brow.
Machine learning for the rest of us
To learn more about how exponential technology will affect business and life, see Digital Futures in the Digitalist Magazine.
For more on next-generation business intelligence in the enterprise, see An AI Shares My Office.