Part 1 in a 2-part series
In this post, I continue the topic of machine learning, now within the context of edge computing. Even viewed through the lens of a single lab, there is a wealth of project work across multiple industries generating critical data through asset-intensive remote operations. Here, the goals of digital transformation include optimizing operational integrity by pairing the edge and the cloud.
The edge and the cloud for remote operations
With the continuous surge of Industrial IoT (IIoT) data – both raw and processed – driving the formation and implementation of digital business processes, compute today needs to sit as close to where data originates as possible. Edge computing achieves this through local processing of the data that matters most. It gives process industries the chance to improve end-to-end operational integrity for remote operations in real time. The goal is to remedy asset issues, keep workers safe, and consistently comply with industry, environmental, and other government regulations.
By monitoring assets at the edge, customers reduce operating costs and downtime, dispatching repairs or replacing equipment components before they fail. In an upstream oil and gas operation like offshore drilling, real-time data – and what it can tell you – is critical to operational integrity; here, something as small as a packet delay can disrupt the business or demonstrably harm both assets and workers. Remote operations in oil and gas are a fast-paced, decision-driven environment ready to benefit from better data and the advanced analytics capabilities that can make sense of it.
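As a minimal sketch of what edge-side monitoring might look like, the hypothetical snippet below flags sensor readings that drift beyond a rolling baseline – the kind of local check that could prompt a repair dispatch before a component fails. The sensor values, window size, and threshold are illustrative assumptions, not drawn from any specific project.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, sigma=3.0):
    """Return indices of readings that deviate more than `sigma`
    standard deviations from the rolling mean of the previous
    `window` samples -- a simple local check an edge node can run
    without any cloud connection."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd > 0 and abs(readings[i] - mu) > sigma * sd:
            alerts.append(i)  # index of the suspect sample
    return alerts

# Simulated vibration readings (mm/s) from a pump sensor:
# steady around 2.0, then a spike the edge node should catch locally.
vibration = [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 2.0, 9.5, 2.1]
print(flag_anomalies(vibration))  # → [7]
```

In practice the alert would trigger a local action – a maintenance ticket or a controlled shutdown – rather than a print statement, but the point stands: the decision is made at the edge, in real time.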
The ability to take local action with better data
Remote operations, whether offshore or in some other isolated wilderness, must be prepared to take local action as necessary, even when cut off from the mainland. Processing critical data where it originates, at the edge, does not mean compromising the benefits of cloud computing. Some argue that the maximum value of collected data lies in acting on what is most critical to a remote operation in real time, and that once acted upon, the data can be discarded.
Once immediate value has been obtained from data first processed at the edge, IT/OT network managers gain more backhaul options for moving that data to the cloud. Ultimately, data originating at the edge is best moved to the cloud, where it can be widely accessed and combined with other integration services to serve many applications. There is, for example, a significant role for ERP in IIoT, provided companies and their edge and cloud providers can orchestrate operational and business processes seamlessly across multiple applications, platforms, and networks.
Cloud capabilities factor in when you perform Big Data analytics on the full corpus of data generated by critical equipment in far-flung geographic regions, disconnected from centralized business systems. Each remote operation needs edge computing, but the cloud is where you bring together the relevant edge data from, say, multiple rigs (multiple edges) deployed in the Gulf of Mexico.
Edge computing is essential for optimizing industrial data at every point of an operation pertinent to operational integrity. With effective edge computing, remote sites act on the data that matters to a location's real-time situation and optimize their business processes around the insights gleaned from it.
The additive value of cloud
Does a firm need to collect and store all edge data? This may remain debatable for the foreseeable future, along dimensions like data value, edge storage costs, and the cost of moving data. Yet this is where cloud capabilities come in: centralized computing power integrates Big Data originating from all remote locations and their networks to provide insight into operations. The cloud is where you most effectively train the machine learning algorithms you deploy at the edge. There is immense value in learning from data originating across all remote locations; machines and systems at any one site can be optimized by what is learned from the edge data of the others.
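To make the train-in-the-cloud, score-at-the-edge pattern concrete, here is a deliberately tiny sketch: a model is fit on pooled history "in the cloud," shipped to the edge as a few JSON coefficients, and evaluated locally with no cloud access. The least-squares fit stands in for real model training, and the temperature/wear data is invented for illustration.

```python
import json

# "Cloud" side: fit a least-squares line to pooled sensor history
# from many sites. A stand-in for real model training.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return {"slope": slope, "intercept": my - slope * mx}

# Pooled history: bearing temperature (C) vs. observed wear index.
temps = [40, 50, 60, 70, 80]
wear = [0.1, 0.3, 0.5, 0.7, 0.9]
model = fit_line(temps, wear)

# Ship the tiny model to the edge as JSON; the edge only evaluates it.
payload = json.dumps(model)

# "Edge" side: load the coefficients and score a fresh local reading,
# even when disconnected from the cloud.
m = json.loads(payload)
def predict(temp):
    return m["slope"] * temp + m["intercept"]

print(round(predict(75), 2))  # → 0.8
```

The asymmetry is the point: training needs the full corpus from every site and belongs in the cloud, while the trained artifact is small enough to run anywhere, including a rig with intermittent connectivity.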
If you wish to build some solid knowledge of edge computing and cloud, there are many links to click and sources to draw from. My intent is to describe current and ongoing project work that illustrates the most important dimensions of edge and cloud working together to meet the operational integrity needs of remote sites in process industries.
In Part 2, I will introduce an SAP Co-Innovation Lab project focused on connected assets for asset health monitoring and maintenance. This multi-phase co-innovation project seeks to enable persistent and accurate operational visibility at the edge for both headquarters and on-site operations, demonstrating real-time situational awareness and “insight to action” for workers at the point of work execution in remote regions.
For more insight on emerging technology, see Smarter Edge Industrial Manufacturers Need To Serve The Segment Of One.