Machine learning will come of age this year, moving from the research labs and proof-of-concept implementations to cutting-edge business solutions. Along the way, it will help power innovations such as autonomous vehicles, precision farming, therapeutic drug discovery, and advanced fraud detection for financial institutions.
Machine learning sits at the intersection of statistics, computer science, and artificial intelligence, focusing on fast, efficient algorithms that enable real-time data processing. Rather than merely following explicitly programmed instructions, machine learning algorithms learn from experience, making them a key component of artificial intelligence platforms.
Machine learning helps tackle IoT data flows
Machine learning may also help us with a challenge from one of last year’s most buzzed-about technology developments: the Internet of Things. The first generation of Big Data analytics grew up around the flow of information generated by social media, online shopping, online videos, web surfing, and other user-generated online behaviors, according to Vin Sharma, the director of machine learning solutions in Intel’s Data Center Group.
Analyzing these massive datasets required new technologies: flexible cloud computing and distributed processing frameworks such as Apache Hadoop and Spark. It also required more powerful, high-performance processors that provided the tools to uncover the insights in Big Data.
And today’s IoT-connected networks dwarf the data volume from this first era of Big Data. As devices and sensors continue proliferating, so will the volume of data they create.
For example, a single autonomous car will generate 4,000 GB of data per day. The new Airbus A380-1000 is equipped with 10,000 sensors in each wing. Legacy Big Data technology won’t be able to handle the data created by connected appliances in smart homes, traffic sensors in smart cities, and robotic systems in smart factories.
New and exciting system requirements
Machine learning is key to analyzing the enormous, repetitious volumes of data flowing from vast, always-on IoT networks. While machine learning may seem like science fiction to many, it is already in use and familiar to users of social media and online shopping (Facebook’s news feed relies on machine learning algorithms, and Amazon’s recommendation engine uses machine learning to suggest what book or movie you should enjoy next).
Machine learning systems recognize the normal flow patterns of data present on IoT networks and focus on the anomalies or patterns outside the norm. So from billions of data points, machine learning can separate the “signal from the noise” in vast data flows, helping organizations focus on what’s meaningful.
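One simple way to picture this "signal from the noise" idea is baseline-and-deviation anomaly detection. The sketch below flags sensor readings that fall far outside the recent norm using a rolling z-score; it is a minimal illustration of the concept, not any particular vendor's system, and the window and threshold values are assumptions chosen for the example.

```python
from collections import deque
from math import sqrt

def rolling_zscore_anomalies(readings, window=20, threshold=3.0):
    """Flag readings that deviate sharply from the recent norm.

    A reading is anomalous if it lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings.
    """
    history = deque(maxlen=window)  # sliding baseline of recent values
    anomalies = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mean = sum(history) / window
            std = sqrt(sum((v - mean) ** 2 for v in history) / window)
            if std > 0 and abs(x - mean) / std > threshold:
                anomalies.append((i, x))
        history.append(x)
    return anomalies

# A steady sensor stream with one spike at index 30:
stream = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8] * 5 + [25.0] + [10.0] * 5
print(rolling_zscore_anomalies(stream, window=10))  # → [(30, 25.0)]
```

Production IoT pipelines replace this hand-rolled baseline with learned models, but the principle is the same: characterize normal flow patterns, then surface only the data points that break them.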
However, to be useful and effective for businesses, machine learning algorithms must run computations at enormous scale in a matter of milliseconds — on an ongoing basis. These ever more complex computations put pressure on traditional datacenter processors and computing platforms.
To operate at scale and in real time, machine learning systems require processors with multiple integrated cores, faster memory subsystems, and architectures that can parallelize processing for next-generation analytics. These platforms combine built-in analytical processing engines with the capacity to run complex algorithms in-memory, delivering real-time results and immediate application of insights.
Processors built for high-performance computing will be in high demand. Machine learning and artificial intelligence will need a lot more power as they begin to connect the dots between IoT data flows and customer engagement for improved sales and outreach.
These processors were traditionally the province of research laboratories and supercomputing challenges, such as modeling weather patterns and sequencing genomes. But machine learning platforms will only grow more necessary as IoT networks become larger and more pervasive — and as businesses increasingly base their success on the insights found in machine-to-machine communication.
These processors deliver the performance required for the most demanding workloads, including machine learning and artificial intelligence algorithms. No longer confined to the rarefied supercomputing environments of research centers and universities, they are increasingly becoming a requirement for cutting-edge businesses.
For more on future tech, see 20 Technology Predictions To Keep Your Eye On In 2017.