Business intelligence (BI) tools first appeared on the enterprise technology scene several decades ago, clumsy and difficult to use at first but ultimately improving the flow of data through organizations from their operational systems to decision support. Data warehousing cut the time it took to access data, but even at their full maturity, BI systems could do little more than organize data and present it in reports. The rules-driven software wasn't actually providing intelligence at all.
But with the advancement of artificial intelligence and—more importantly—machine learning, true business intelligence is actually on its way to the enterprise. Such self-learning software will run on servers, be built into bots, drive decision-making systems, be embedded into cars or aircraft, and become the beating heart of mobile devices.
Increased data-processing power, the availability of big data, the Internet of Things, and improvements in algorithms are converging to power this actual business intelligence. To be clear, this will be an evolution rather than a revolution. There are a number of factors that could limit the progress of machine learning and its integration into business, from quality of data and human programming to cultural resistance. However, the question is when, not if, the BI tools of today become a quaint relic of earlier times and real business intelligence emerges.
Beyond sci-fi AI
Artificial intelligence (AI), a term dating back to the 1960s, is tossed about quite a bit these days. It’s an umbrella descriptor that refers to computers capable of doing things that a human typically would. It’s often inaccurately used interchangeably with machine learning. Machine learning, however, is a specific subset of AI that uses statistical methods to improve the performance of a system over time. Any programmer can write code to develop a program that more or less acts like a human. But it’s not machine learning unless the system is learning how to behave based on data. Machine learning comes in several flavors: supervised learning (the algorithm is trained using examples where the input data and the correct output are known), unsupervised learning (the algorithm must discover patterns in the data on its own), and reinforcement learning (the algorithm is rewarded or penalized for the actions it takes based on trial and error). In each case, the machine is able to learn from data—structured, and increasingly unstructured in the future—without explicitly being programmed to do so, absorbing new behaviors and functions over time.
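To make the supervised-learning flavor concrete, here is a minimal sketch (not from the article, and simplified to a single linear rule): a program that is never told the rule behind its training data, only shown labeled input/output pairs, and that improves its two parameters with each pass by gradient descent.

```python
# Minimal supervised learning sketch: fit y = w*x + b to labeled
# examples by gradient descent. The program is not told the rule;
# it improves its parameters from the data alone.

def train(examples, epochs=2000, lr=0.01):
    """examples: list of (input, correct_output) pairs."""
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(epochs):
        # Gradient of mean squared error over all training examples.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in examples) / n
        # Step the parameters against the gradient: the "learning" step.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Labeled training data drawn from a hidden rule, y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(10)]
w, b = train(data)
```

After training, `w` and `b` converge toward 3 and 1: the system has recovered the hidden rule from examples rather than from explicit programming, which is the essence of what distinguishes machine learning from ordinary rules-driven software.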
Gartner recently placed machine learning at the height of “inflated expectations” in its report, noting that this emerging capability is two to five years from mainstream adoption. But those immersed in machine learning development are grounded in reality, and the reality is that they are making significant strides. Machine learning mimics human learning, and like human learning, it takes time.
The big advantage machines have over us is that they can handle massive amounts of data, take advantage of ever-faster processing power, and run (and thereby improve) 24 hours a day. Over just the last four years, the error rate in machine learning-driven image recognition, for example, has fallen dramatically to near zero—practically to human performance levels.
Still, every instance of machine learning is different. Just as, for us, learning to play the piano is different from learning to crawl, a computer may take longer to learn to analyze text than to recognize the meaning of a furrowed brow.
Machine learning for the rest of us
To learn more about how exponential technology will affect business and life, see Digital Futures in the Digitalist Magazine.
About Kai Goerlich
Kai Goerlich is the Chief Futurist at SAP Innovation Center network. His specialties include competitive intelligence, market intelligence, corporate foresight, trends, futuring, and ideation.
Share your thoughts with Kai on Twitter @KaiGoe.