A Defining Moment: The Internet Of Things And Edge Computing

Chuck Pharris

Part 1 in the 3-part “Edge Computing Series”

When it comes to defining IT terms, I say the simpler the better. Let’s start with the Internet of Things (IoT). The IoT is the network of connected things—like industrial machines or coffeemakers—equipped with sensors and APIs that enable connectivity and data exchange. Simple enough.

Before defining edge computing, however, we need to understand that the “edge” gets its name from its relation to the core. The core is simply the collection of technologies (housed in a data center or distributed in the cloud) that make up the critical IT and business functionality for any organization. When a business deploys an IoT scenario—say a series of HVAC machines throughout a college campus—the machines are, by definition, deployed at the edge.

Another term bandied about is “edge processing”—which is basically data processing that happens at the edge rather than at the core. This brings us to the question: Why process at the edge?

An answer to a problem

The idea of processing data at the edge is a solution to a practical engineering problem. IoT as a concept has always assumed that connected things would exchange data with the core via the cloud. Problem is, obstacles stand in the way. Some of these involve:

  • Bandwidth: For many IoT deployments, the bandwidth required to transmit data from edge devices is cost-prohibitive.
  • Connectivity: For moving deployments (such as connected vehicles) or for deployments in remote locales (such as an oil rig in the ocean), connectivity may not be reliable.
  • Latency: In situations where real-time data is required—say construction equipment designed to detect and avoid potential collisions—the data latency of the cloud is unacceptable.
  • Power consumption: Many sensors in edge devices cannot live up to the power-consumption demands required for transmitting data to the cloud.
  • Security: Most sensors—often limited in functionality—cannot provide the kind of security required in a digital economy with an expanding threat landscape.

Edge computing overcomes these obstacles with the use of an IoT gateway. Think of the gateway as a hub of sorts that lives in close proximity to the edge devices within a local area network (LAN). This hub—a full-blown server or something more purpose-built—can help conserve bandwidth by running an analytical algorithm to determine the business value of incoming sensor data and transmitting to the core only what makes sense. The hub also addresses the issue of intermittent connectivity by housing software and functionality that can be used to make decisions on the ground without access to the core. Similarly, latency and power-consumption issues are addressed through the hub, which communicates quickly and efficiently with sensors through fast, low-energy protocols such as Bluetooth or ZigBee.
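
To make that bandwidth-conserving step concrete, here is a minimal sketch of gateway-side filtering logic that forwards a sensor reading to the core only when it deviates meaningfully from the recent norm. The class names and the threshold are illustrative assumptions, not any particular product’s API.

    # A minimal sketch of gateway-side filtering: forward a reading to the
    # core only when it deviates meaningfully from the rolling average.
    # The names and the threshold are illustrative assumptions.
    from collections import deque
    from dataclasses import dataclass

    DEVIATION_THRESHOLD = 2.5  # degrees Celsius; tune per deployment

    @dataclass
    class Reading:
        sensor_id: str
        value: float  # e.g., HVAC supply-air temperature

    class EdgeFilter:
        def __init__(self, window: int = 50):
            self.history = deque(maxlen=window)  # rolling window of recent values

        def worth_forwarding(self, reading: Reading) -> bool:
            """True if the reading is interesting enough to send to the core."""
            if not self.history:
                self.history.append(reading.value)
                return True  # always forward the first reading
            average = sum(self.history) / len(self.history)
            self.history.append(reading.value)
            return abs(reading.value - average) > DEVIATION_THRESHOLD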

On the security front, an edge computing hub can provide secure tunnels back to the digital core IT infrastructure. Remember that in October 2016, a denial-of-service attack—which disrupted Internet access across much of North America and Europe—was executed with a botnet of unsecured IoT devices. As far as use cases for edge computing go, let’s call this one a slam dunk.

The role of microservices

Microservices are loosely coupled, independently deployable nuggets of application functionality. Communicating through APIs and running as unique processes, microservices are ideally suited to IoT scenarios. Why? Because they’re deployed in isolated containers, so if one fails, it doesn’t take down the entire network or interrupt an entire business process.

The idea is that microservices are created at the core and then delivered to the hub at the edge. The hub then makes them available to each device on the edge as needed. The algorithms that determine the business value of sensor data? These run in microservices. Predictive analytics—let’s say to predict the failure of an HVAC machine on the college campus? Also delivered via microservices. Indeed, microservices are what make the IoT a practical reality. Without them, the IoT would still be only a concept.
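
As a sketch of what one of these nuggets might look like in practice, here is a tiny, self-contained HTTP service of the kind a hub could host for its edge devices. The route, port, and scoring rule are hypothetical; a real deployment would package something like this in a container and wire it to the core’s delivery pipeline.

    # Sketch of an edge microservice the hub might host: a small, self-contained
    # HTTP endpoint that scores sensor payloads. The route, port, and scoring
    # rule are hypothetical.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ScoreHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/score":
                self.send_error(404)
                return
            length = int(self.headers["Content-Length"])
            payload = json.loads(self.rfile.read(length))
            # Toy rule: flag HVAC readings outside a safe operating band.
            out_of_band = not (10.0 <= payload.get("temperature_c", 20.0) <= 35.0)
            body = json.dumps({"forward_to_core": out_of_band}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # If this service crashes, only this one process dies; the hub and
        # its other services keep running.
        HTTPServer(("0.0.0.0", 8080), ScoreHandler).serve_forever()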

There’s a lot more to discuss on the topic of edge computing, but I’ll stop here for now. To dive in further, see this paper on the “4 Ps” of intelligent edge processing: Excelling at the Edge: Achieving Business Outcomes in a Connected World. Also look for my next blog in this series: “The IoT Data Explosion, IPv6, and the Need to Process at the Edge.”

About Chuck Pharris

Chuck Pharris has over 25 years of experience in the manufacturing industries, driving sales and marketing at a large controls systems company, overseeing plant automation and supply chain management, and conducting energy analysis for the U.S. Department of Energy’s research laboratory. He has worked in all areas of plant automation, including process control, energy management, production management, and supply chain management. He currently is the global director of IoT marketing for SAP. Contact him at chuck.pharris@sap.com.

5 Reasons To Consider Managed Detection And Response For Cybersecurity

Dakota Murphey

There was once a time when you could install a firewall and say with relative confidence that your business was protected from cyber attacks. But as hackers, crackers, and cybercriminals get smarter, businesses need to invest more resources to keep data secure. This has led to increased demand for managed detection and response (MDR) services.

These services monitor your computer system at all times, detecting and neutralizing potential threats before they can become a big problem. Here are five reasons that you should consider investing in MDR.

1. Increasingly sophisticated cyber attacks

It is unfortunately true that cyber attacks get more sophisticated every day. Many businesses, large and small, have suffered data theft, ransomware attacks, and more because they relied on defenses that had simply become ineffective. In some cases, businesses were alerted to breaches and data loss only long after the incident had occurred, when there was nothing they could do about it.

2. Inadequate in-house expertise

Small businesses are especially at risk because they are unlikely to have a skilled IT team in house. Even those with a dedicated IT support team are unlikely to have the kind of cybersecurity expertise required to prevent cyber attacks 24×7.

3. Incoming GDPR

The General Data Protection Regulation (GDPR) will apply in the EU from May 2018 and is set to have major ramifications. GDPR puts much stricter controls on how personal data is collected and stored, and it imposes much harsher penalties on businesses that lose customer data. Fines are set to increase massively, meaning a cyber attack can have major financial implications beyond the inherent costs of business disruption and potential reputational damage.

4. The risk of being an easy target

According to Gartner, detection and response will become the most important form of cybersecurity for businesses by 2020. As companies upgrade their defenses and put new systems in place, they become harder to hack. The knock-on effect is that cybercriminals are less likely to target businesses with powerful defenses and will instead focus their efforts on those with outdated security. Companies that put off investing will find themselves behind the curve and become easy targets.

5. A good investment

A service like MDR can detect a huge range of cyber threats and deal with them appropriately. When you have MDR in place, you are immediately notified of any threats, and cybersecurity experts can help you deal with the problem and provide advice. In short, MDR can save your business a significant amount of time and money that you would otherwise spend dealing with problems after suffering an attack.

Learn more about the importance of strong defenses in The Future of Cybersecurity: Trust as Competitive Advantage.

About Dakota Murphey

Dakota Murphey is a tech writer specialising in cybersecurity, working with Redscan on this and a number of other GDPR, MDR, and ethical hacking projects.

The IoT Data Explosion, IPv6, And The Need To Process At The Edge

Chuck Pharris

Part 2 in the 3-part “Edge Computing Series”

The Internet of Things (IoT) is growing. By how much? IDC predicts there will be 30 billion connected things worldwide by 2020.[1] After 2020? That’s anybody’s guess—but one clear indication that this growth will be big is the move to a new Internet addressing system: IPv6.

The problem is that the current system, IPv4, allows for only about 4.3 billion addresses, which forces some devices to share addresses. With more and more sensors embedded in more and more things—each requiring an IP address—this state of affairs is unsustainable.

IPv6 solves this problem by moving to 128-bit addresses, bumping up the universe of available addresses to a number that’s hard to comprehend—something like 340,000,000,000,000,000,000,000,000,000,000,000,000 (or 340 trillion trillion trillion).[2]
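
The arithmetic behind these figures is simply address width: IPv4 addresses are 32 bits and IPv6 addresses are 128 bits, so the two address spaces are 2^32 and 2^128. A quick sanity check:

    # Address-space sizes follow directly from address width: 2 ** bits.
    ipv4_addresses = 2 ** 32   # 4,294,967,296 -- roughly 4.3 billion
    ipv6_addresses = 2 ** 128  # 340,282,366,920,938,463,463,374,607,431,768,211,456
    print(f"{ipv4_addresses:,}")  # ~4.3 x 10^9
    print(f"{ipv6_addresses:,}")  # ~3.4 x 10^38, i.e., 340 trillion trillion trillion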

But what about the data?

I can’t say that I expect 340 trillion trillion trillion IoT devices out there anytime soon. But as the IoT grows, the amount of data generated by proliferating sensors embedded in connected things will grow as well. And for organizations deploying IoT devices, moving all this data back and forth via the cloud is simply untenable.

Hence the idea of edge processing. Edge processing, as I explained in a previous blog, is the idea of processing data on the “edge” where IoT devices are deployed—rather than sending all sensor-generated data back to mission central over the cloud. Without edge processing, I don’t think the IoT could be a reality.

But even if we were to revamp the planet’s Internet infrastructure, would all that data be worth moving? In fact, much of the data produced by sensors is not particularly useful. So instead of doing a rip and replace of the Internet, why not just process data at the edge, using an IoT gateway to run the analytics on site and sending back to mission central only what’s useful?

The four pillars

It is such practical concerns that make edge processing an appealing approach for real-world IoT deployments. But how do you move forward?

In a recent white paper, SAP explores some of the primary concerns, categorizing them according to the 4 Ps of intelligent edge processing: presence, performance, power, and protection. The paper examines these four pillars and focuses on better ways to cleanse, filter, and enrich the growing volumes of sensor data. Let’s take a quick look.

Presence

Intelligent edge processing requires your systems to be present at the creation, as it were—on the edge, where the action takes place. Using machine learning and smart algorithms on the edge, you can generate insight and take action without human intervention. This is good, because running in a more autonomous fashion is an imperative for the digital economy.

As an example, the paper dives into automated reordering and receiving using warehouse shelves equipped with sensors. A different example is automated work orders triggered by analysis of events. This is interesting because the automated action—creation of a work order—requires a follow-on action involving humans, such as dispatching a technician to the site. In this way, many organizations will use edge processing in conjunction with human workers. It all depends on the scenario that works best in context.
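
A sketch of that pattern, with the vibration limit and the create_work_order() hook both invented for illustration, might look like this:

    # Sketch: an edge rule that turns an analyzed event into an automated
    # work order for human follow-up. The limit and create_work_order()
    # are invented for illustration.
    VIBRATION_LIMIT_MM_S = 7.1  # alarm level; illustrative only

    def on_vibration_event(machine_id: str, rms_mm_s: float, create_work_order) -> None:
        """Autonomously open a ticket; a technician handles the follow-up."""
        if rms_mm_s > VIBRATION_LIMIT_MM_S:
            create_work_order(
                machine_id=machine_id,
                priority="high",
                note=f"Vibration {rms_mm_s:.1f} mm/s exceeds limit of {VIBRATION_LIMIT_MM_S} mm/s",
            )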

Performance

Intelligent edge processing can improve performance for IoT scenarios by keeping sensor data from overwhelming traditional data-storage technologies. Take the example of manufacturing, where the goal is to approximate a standard set by the “golden batch” in all subsequent manufacturing runs. By combining operational technology with information technology, you can process the complex events that happen on the edge and bring new batches into closer compliance with the golden batch. This helps improve manufacturing performance from the perspectives of both speed and quality.
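
One way to picture the golden-batch comparison is as a deviation check against a stored reference profile. The variable names and tolerances below are made up; this is a sketch of the idea, not a real control-system interface:

    # Sketch: compare a live batch's process variables against a stored
    # "golden batch" profile. Variable names and tolerances are made up.
    GOLDEN_BATCH = {"temperature_c": 78.0, "pressure_kpa": 310.0, "ph": 6.8}
    TOLERANCE = {"temperature_c": 1.5, "pressure_kpa": 12.0, "ph": 0.2}

    def deviations(live: dict) -> dict:
        """Return each variable's drift from the golden profile when out of band."""
        out = {}
        for key, target in GOLDEN_BATCH.items():
            drift = live[key] - target
            if abs(drift) > TOLERANCE[key]:
                out[key] = drift  # positive = running high, negative = low
        return out

    # deviations({"temperature_c": 81.2, "pressure_kpa": 305.0, "ph": 6.9})
    # reports only temperature (~3.2 degrees high); the others are in band.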

Power

Intelligent edge processing gives you the power to execute processes where they take place—without the latency of data transfer in the cloud. Take, for example, a remote mining operation with limited connectivity. Whatever processes occur on site—say, the ordering of replacement parts for mining equipment—can still be carried out with edge processing. Workers can record the order, and replacements can either be made where parts are locally available or put on hold until the part arrives. In either case, the need for the part is recorded, and the information can be synced opportunistically when a connection becomes available.
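
The pattern described here, record locally now and sync when you can, is essentially store-and-forward. A minimal sketch, where is_connected() and send_to_core() stand in for whatever uplink a real deployment provides:

    # Sketch of store-and-forward under intermittent connectivity: persist
    # part orders locally, then flush when the uplink returns.
    # is_connected() and send_to_core() are hypothetical stand-ins.
    import json
    import time
    from pathlib import Path

    QUEUE = Path("/var/edge/pending_orders.jsonl")

    def record_order(part_number: str, quantity: int) -> None:
        """Always succeeds locally, regardless of connectivity."""
        entry = {"part": part_number, "qty": quantity, "ts": time.time()}
        with QUEUE.open("a") as f:
            f.write(json.dumps(entry) + "\n")

    def flush_queue(is_connected, send_to_core) -> None:
        """Opportunistically sync queued orders back to the core."""
        if not (is_connected() and QUEUE.exists()):
            return
        for line in QUEUE.read_text().splitlines():
            send_to_core(json.loads(line))
        QUEUE.unlink()  # clear the local queue after a successful sync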

Protection

Intelligent edge processing can help deliver the security needed for IoT deployments. By their very nature, such deployments emphasize openness and are designed to work with other networks—many of which may not be under your control. With intelligent edge processing, you can track the unique identities of sensors in your network, encrypt any data sent out, and run the necessary checks on data coming in. On-site processing in this fashion, in fact, is required—because managing such security via the cloud would not only introduce data latency into the equation, but could also open up holes to be exploited by malicious actors.
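
As a sketch of what tracking unique identities and checking incoming data can mean at the gateway, consider verifying a per-device message signature before trusting a payload. The key registry and message format here are assumptions for illustration:

    # Sketch: authenticate inbound sensor messages at the gateway with a
    # per-device HMAC key before trusting the payload. The key registry
    # and message format are assumptions for illustration.
    import hashlib
    import hmac

    DEVICE_KEYS = {"hvac-017": b"per-device-secret-provisioned-at-install"}

    def verify(device_id: str, payload: bytes, signature_hex: str) -> bool:
        """Reject messages from unknown devices or with bad signatures."""
        key = DEVICE_KEYS.get(device_id)
        if key is None:
            return False  # unknown identity: drop at the edge
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature_hex)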

So, yes, the IoT is growing—and along with it, the volumes of data companies are required to manage. This volume of data cannot be managed entirely via the cloud. Edge processing is a solution to this problem. Take a look at the “4 Ps” paper here: “Excelling at the Edge: Achieving Business Outcomes in a Connected World.” And stay tuned for my final blog in this series: “Edge Computing and the New Decentralization: The Rhyming of IT History.”

[1] http://www.idc.com/infographics/IoT

[2] https://www.google.com/intl/en/ipv6/

About Chuck Pharris

Chuck Pharris has over 25 years of experience in the manufacturing industries, driving sales and marketing at a large controls systems company, overseeing plant automation and supply chain management, and conducting energy analysis for the U.S. Department of Energy’s research laboratory. He has worked in all areas of plant automation, including process control, energy management, production management, and supply chain management. He currently is the global director of IoT marketing for SAP. Contact him at chuck.pharris@sap.com.

Diving Deep Into Digital Experiences

Kai Goerlich

 

  • Google Cardboard VR goggles cost US$8.
  • By 2019, immersive solutions will be adopted in 20% of enterprise businesses.
  • By 2025, the market for immersive hardware and software technology could be $182 billion.
  • In 2017, Lowe’s launched Holoroom How To VR DIY clinics.

From Dipping a Toe to Fully Immersed

The first wave of virtual reality (VR) and augmented reality (AR) is here, using smartphones, glasses, and goggles to place us in the middle of 360-degree digital environments or to overlay digital artifacts on the physical world. Prototypes, pilot projects, and first movers have already emerged:

  • Guiding warehouse pickers, cargo loaders, and truck drivers with AR
  • Overlaying constantly updated blueprints, measurements, and other construction data on building sites in real time with AR
  • Building 3D machine prototypes in VR for virtual testing and maintenance planning
  • Exhibiting new appliances and fixtures in a VR mockup of the customer’s home
  • Teaching medicine with AR tools that overlay diagnostics and instructions on patients’ bodies

A Vast Sea of Possibilities

Immersive technologies leapt forward in spring 2017 with the introduction of three new products:

  • Nvidia’s Project Holodeck, which generates shared photorealistic VR environments
  • A cloud-based platform for industrial AR from Lenovo New Vision AR and Wikitude
  • A workspace and headset from Meta that lets users use their hands to interact with AR artifacts

The Truly Digital Workplace

New immersive experiences won’t simply be new tools for existing tasks. They promise to create entirely new ways of working.

VR avatars that look and sound like their owners will soon be able to meet in realistic virtual meeting spaces without requiring users to leave their desks or even their homes. With enough computing power and a smart-enough AI, we could soon let VR avatars act as our proxies while we’re doing other things—and (theoretically) do it well enough that no one can tell the difference.

We’ll need a way to signal when an avatar is being human driven in real time, when it’s on autopilot, and when it’s owned by a bot.


What Is Immersion?

A completely immersive experience that’s indistinguishable from real life is impossible given the current constraints on power, throughput, and battery life.

To make current digital experiences more convincing, we’ll need interactive sensors in objects and materials, more powerful infrastructure to create realistic images, and smarter interfaces to interpret and interact with data.

When everything around us is intelligent and interactive, every environment could have an AR overlay or VR presence, with use cases ranging from gaming to firefighting.

We could see a backlash touting the superiority of the unmediated physical world—but multisensory immersive experiences that we can navigate in 360-degree space will change what we consider “real.”


Download the executive brief Diving Deep Into Digital Experiences.


Read the full article Swimming in the Immersive Digital Experience.

About Kai Goerlich

Kai Goerlich is the Chief Futurist at the SAP Innovation Center Network. His specialties include competitive intelligence, market intelligence, corporate foresight, trends, futuring, and ideation. Share your thoughts with Kai on Twitter @KaiGoe.

Jenny Dearborn: Soft Skills Will Be Essential for Future Careers

Jenny Dearborn

The Japanese culture has always shown a special reverence for its elderly. That’s why, in 1963, the government began a tradition of giving a silver dish, called a sakazuki, to each citizen who reached the age of 100 by Keiro no Hi (Respect for the Elders Day), which is celebrated on the third Monday of each September.

That first year, there were 153 recipients, according to The Japan Times. By 2016, the number had swelled to more than 65,000, and the dishes cost the already cash-strapped government more than US$2 million, Business Insider reports. Despite the country’s continued devotion to its seniors, the article continues, the government felt obliged to downgrade the finish of the dishes to silver plating to save money.

What tends to get lost in discussions about automation taking over jobs and Millennials taking over the workplace is the impact of increased longevity. In the future, people will need to be in the workforce much longer than they are today. Half of the people born in Japan today, for example, are predicted to live to 107, making their ancestors seem fragile by comparison, according to Lynda Gratton and Andrew Scott, professors at the London Business School and authors of The 100-Year Life: Living and Working in an Age of Longevity.

The End of the Three-Stage Career

Assuming that advances in healthcare continue, future generations in wealthier societies could be looking at careers lasting 65 or more years, rather than at the roughly 40 years for today’s 70-year-olds, write Gratton and Scott. The three-stage model of employment that dominates the global economy today—education, work, and retirement—will be blown out of the water.

It will be replaced by a new model in which people continually learn new skills and shed old ones. Consider that today’s most in-demand occupations and specialties did not exist 10 years ago, according to The Future of Jobs, a report from the World Economic Forum.

And the pace of change is only going to accelerate. Sixty-five percent of children entering primary school today will ultimately end up working in jobs that don’t yet exist, the report notes.

Our current educational systems are not equipped to cope with this degree of change. For example, roughly half of the subject knowledge acquired during the first year of a four-year technical degree, such as computer science, is outdated by the time students graduate, the report continues.

Skills That Transcend the Job Market

Instead of treating post-secondary education as a jumping-off point for a specific career path, we may see a switch to a shorter school career that focuses more on skills that transcend a constantly shifting job market. Today, some of these skills, such as complex problem solving and critical thinking, are taught mostly in the context of broader disciplines, such as math or the humanities.

Other competencies that will become critically important in the future are currently treated as if they come naturally or over time with maturity or experience. We receive little, if any, formal training, for example, in creativity and innovation, empathy, emotional intelligence, cross-cultural awareness, persuasion, active listening, and acceptance of change. (No wonder the self-help marketplace continues to thrive!)

These skills, which today are heaped together under the dismissive “soft” rubric, are going to harden up to become indispensable. They will become more important, thanks to artificial intelligence and machine learning, which will usher in an era of infinite information, rendering the concept of an expert in most of today’s job disciplines a quaint relic. As our ability to know more than those around us decreases, our need to be able to collaborate well (with both humans and machines) will help define our success in the future.

Individuals and organizations alike will have to learn how to become more flexible and ready to give up set-in-stone ideas about how businesses and careers are supposed to operate. Given the rapid advances in knowledge and attendant skills that the future will bring, we must be willing to say, repeatedly, that whatever we’ve learned to that point doesn’t apply anymore.

Careers will become more like life itself: a series of unpredictable, fluid experiences rather than a tightly scripted narrative. We need to think about the way forward and be more willing to accept change at the individual and organizational levels.

Rethink Employee Training

One way that organizations can help employees manage this shift is by rethinking training. Today, overworked and overwhelmed employees devote just 1% of their workweek to learning, according to a study by consultancy Bersin by Deloitte. Meanwhile, top business leaders such as Bill Gates and Nike founder Phil Knight spend about five hours a week reading, thinking, and experimenting, according to an article in Inc. magazine.

If organizations are to avoid high turnover costs in a world where the need for new skills is shifting constantly, they must give employees more time for learning and make training courses more relevant to the future needs of organizations and individuals, not just to their current needs.

The amount of learning required will vary by role. That’s why at SAP we’re creating learning personas for specific roles in the company and determining how many hours will be required for each. We’re also dividing up training hours into distinct topics:

  • Law: 10%. This is training required by law, such as training to prevent sexual harassment in the workplace.

  • Company: 20%. Company training includes internal policies and systems.

  • Business: 30%. Employees learn skills required for their current roles in their business units.

  • Future: 40%. This is internal, external, and employee-driven training to close critical skill gaps for jobs of the future.

In the future, we will always need to learn, grow, read, seek out knowledge and truth, and better ourselves with new skills. With the support of employers and educators, we will transform our hardwired fear of change into excitement for change.

We must be able to say to ourselves, “I’m excited to learn something new that I never thought I could do or that never seemed possible before.” D!
