
Forget Consumer IoT—Industrial IoT Will Be The Revolution

Danielle Beurteaux

Recent big news in the IoT-sphere isn’t about intelligent toasters or sentient vacuum cleaners. It’s about the Industrial Internet of Things (IIoT), which is about to blow consumer IoT out of the water. Tom Siebel, founder of Siebel Systems, recently renamed his latest company C3 IoT (previously known as C3, a firm focused on energy), broadening its aim to provide enterprise-level software to industries ranging from utilities to aerospace.

Computer maker Dell also recently launched a PC built for industrial IoT environments, and GE and Tata Consultancy Services have formed a partnership that will start with GE’s industrial cloud platform, Predix, which GE launched last summer.

Research from Accenture estimates that IIoT could add a combined $10.6 trillion to the GDPs of 20 economies by 2030. That’s based on current IIoT investment trends; with more investment, the potential growth is even greater.

A white paper by IDC, sponsored by SAP, “IoT and Digital Transformation: A Tale of Four Industries,” looked at manufacturing, healthcare, retail, and consumer products and found that “business benefits from IIoT will be realized at different speeds and on different scales.”

They’re calling it Industrie 4.0 in Germany, but regardless of the name, getting from where we are today to a future of ubiquitous IIoT faces some hurdles, according to Kai Goerlich, Idea Director of Thought Leadership at SAP. Will we be ready?

SAP: How will we make a living?

KG: The last wave of digitization was largely driven by telecommunications. We could view mobility as the first wave of IoT. The difference now is that mobility connected people, while IoT is connecting everything into a large grid, mesh, or whatever you want to call it. The danger is job loss. The World Economic Forum published a graph showing that in highly automated countries the job-loss risk won’t be that high, about 10%. But the U.S. has a 30% risk. The U.S. is still relatively service-heavy, with many people in these functions, compared to Germany, which has already automated a lot.

IIoT, or Industrie 4.0, will in my opinion lead to a total redefinition of how markets run and of economic production without humans. Automation poses the risk that we automate so fast that society can’t adapt. It took us 60 years, from the 1940s and ’50s, to fully automate operations, and now, within 20 years, we have the Internet and mobile. That is a very fast pace: two or three revolutions within a short lifetime, and our systems are not fast enough to react.

SAP: Will business models change?

KG: IIoT is a big game changer for business models. We’re taking out some of the intermediate steps in the value chain with direct one-to-one consumer sales. The old economy ran on a sequential value chain; digitization has completely wiped that out.

SAP: How important are data and interoperability?

KG: On the good side, with more sensors in all devices, we could make more sense of the world. If we can exchange data and have more data points, our picture of the world may be more real-time and realistic than in the past. That requires interoperability—making things work together, exchanging data, and creating insights.

The real money is where the data is; you can already see this happening. All use cases are basically at the data level. It will be totally ambient; in 20 years everything will talk to us. The term IoT was coined around 2000, but it won’t be used for much longer. Industries are already defining it differently—remote maintenance, connected, and so on.

IIoT will digitize physical assets and make everything connected. Estimates put the savings at 50% of fixed-asset costs; that’s a big sum. If you just take the top 10 companies in each industry, there’s a lot of money in it, with big savings in shared products and lifecycle maintenance.

For more insight on how IoT is impacting real-world businesses, see The Internet Of Things And Digital Transformation: A Tale Of Four Industries.

 


IoT, Sensors, And All Things Digital: Can We Handle It All?

Kai Goerlich

It seems we are all part of a big experiment. We’re testing our data-driven consciousness and determining how much information we can digest and at what speed. And if we continue at our current pace, we will soon see sensors and ambient computing infuse our personal and professional lives with a myriad of interactive things. Even items such as coffee mugs may be connected.

In such a digital-driven, hyperconnected world, our perceptions will heavily depend on virtual experiences and our biological view. To get a sense of where we are heading, I had the pleasure of sitting down with Dr. Yvonne Förster, professor of philosophy of culture at Leuphana University in Germany, to get her perspective.

Q: The speed of digital change is just incredible. It’s hard to imagine that we will always be able to keep up. Will humans find a way to cope with the speed of digital change?

A: Without a doubt, algorithms are faster than our conscious thinking. And although our physical reflexes and our intuition are quite fast, they are still slower than the latest computers available right now. From this perspective, we face many challenges.

Much of our future experiences in a digitized world will be powered by technological devices that operate on micro-timescales. The Internet of Things is a term that describes technologically permeated lifeworlds comprising billions of sensors and highly interconnected devices, which measure – or more precisely, sense – various activities in their environment. The interesting question is not so much about coping, but the perspectives and possible futures of human life itself. How will we evolve while changing our lifeworld?

Q: If we cannot operate at the speed of computers, will we experience disruptions between more direct, data-induced experiences?

A: Not necessarily. Disruptions are exactly what modern technology tries to avoid. Smooth operation and flow are the ideal in technology and design, making applications invisible and enabling self-learning systems.

Understanding how technology influences our perception today is a subject of aesthetic research. Media artists try the impossible: making the invisible visible, or rendering the non-experiential rhythm and speed of algorithms experiential. It would be naïve to think that the exponential growth in exposure to technological devices would leave people unchanged.

Evolution goes on in culture. And nowadays, we are not just passively shaped by adaptive behavior; we can also actively alter our bodies and minds. New digitized environments and our own wish to extend human life will be fundamental forces in the game of evolution, which we should carefully reflect on.

Q: Will we still experience our environment without additional interactions? Will nature become a dull world to us?

A: I don’t think so. The world will be a fascinating place in the next few decades when it comes to technological development. It doesn’t necessarily mean that every device will remove us from our environments like the dystopian world portrayed in movies such as The Matrix or Surrogates. But, we also shouldn’t forget that our experiences in nature and everything else are always mediated by cultural concepts, attitudes, and technological devices.

Just think of the perception of time, which has been mediated through watches of all kinds for centuries now. Some would even hold that human culture is essentially rooted in technology from its very beginning. Still, as long as we preserve nature, our world will never be a dull place. There are just new perspectives to discover.

Q: How will wearables and sensors help us achieve new perspectives?

A: The interesting question here is: How will our lifeworld and behavior change when sensors are present everywhere? With the omnipresence of sensors and devices that sense locations and other types of human agency, we find ourselves in an environment that is tracked not only by living beings but also by highly interconnected technological devices. One day you could even say that walls, streets, or cars have eyes in the most literal sense possible.

Sensing is no longer a concept that applies only to living organisms. Rather, it’s a ubiquitous property of our lifeworld. This will deeply change how we act and interact with each other – but more important, it will transform how we engage with objects. Our lifeworld is altered by the Internet of Things as objects sense and communicate among themselves. The impact of this technological development has yet to be estimated and described.

Q: Will everyone become digitized?

A: If we define digitization as a significant part of everyday life that is connected to digital technology, then we are already digitized. However, if we mean that technology will invade our bodies and turn us into cyborgs that are physically connected to the Internet, this is already becoming a reality in laboratories. This idea is strongly connected to enhancing the human body and mind. Still, most people remain skeptical when it comes to technology invading the body.

We can think of a third alternative of digitization: The co-evolution of humans and technology. When our world is deeply permeated by technology, it will present different and new opportunities to humans. We can develop new ways of behavior, creativity, and thinking. Also, we will need to engage with technology and actively reflect on its use.

This approach calls for an understanding of technology as a precondition for handling such innovation critically and creatively. We see these kinds of engagement emerging from artistic and scientific practice. Jennifer Gabrys, for example, works with sensor technology used by citizens in different environments, such as fracking areas, to better understand and build awareness around changing environmental conditions by using do-it-yourself technology.

Q: Will we have a choice in what we do – or do not – want to know?

A: Yes, we certainly have a choice. As biological beings, we are adaptive. The presence of technology is changing – and will continue to change – our perception and behavior. If we don’t reflect on that process, we will remain passive and eventually feel outrun by technology.

Still, technology is our making, even though it is not entirely predictable and manageable. Given that technology functions according to emerging patterns of artificial intelligence, we should be prepared to engage in new processes of understanding and agency in computed environments.

Q: Will digitization change the way I experience my body?

A: The playful element of digitization will change the way we learn as well as the space of what can be known. It’s not just a transformation of the thinking process or the quality of decisions, but an evolution of the body as well. In gaming, for example, we use evolutionarily old, hard-wired behaviors such as fight-or-flight reflexes. This means digital gaming relies less on our culturally prized capacity for reflection and more on gut feeling and our intuitive mode of acting. But it might also bring about completely new patterns of behavior, action, and reaction.

Another aspect of bodily experience in times of digitization is the measurement of movement and live data such as heartbeat, blood pressure, and more. This is accompanied by an objectification of bodily experience. We tend to perceive ourselves as numbers, such as the number of steps we have walked or the calories we have eaten. This can be problematic because it distracts us from our actual bodily state, which is not tantamount to a number or chart appearing on a screen.

The flip side of this is the issue of Big Data and control. Where does this information go, and who uses it? Will your insurance company be interested in your everyday habits? That seems very likely and should be watched carefully.

Q: How will we experience the world in the future? Will it be in the form of data streams?

A: The world around us is becoming sense-driven; it will have eyes and ears. I am waiting for the day when my refrigerator starts arguing with me when I grab a piece of steak instead of a salad. But more interesting is the question of what happens when information goes beyond being presented as text, video, or speech to include body temperature, heart rate, and the pitch of our voice. What kind of knowledge will be generated out of this data?

In the movie Ex Machina, such information gives rise to the first self-conscious android, named Ava. But I am sure that we will not perceive data streams. Data by itself has no value as long as it is not interpreted. Also, the brain is not an information-processing organ; it generates information only through sense-making activities.

Life never deals with raw data. Movement and perception are to be understood as relational activities, which bring about meaningful structures such as me as an individual and you as another person. Similarly, we will conceive technology as part of our environment and, therefore, part of a sense-making process that extends beyond human perception.

Q: If data could be experienced directly one day, where is the border that separates us from it?

A: Current technologies, such as augmented reality and Google Glass, will not change this very much. Even if the physical and virtual worlds merge, these technologies will not interfere with our sense of self. The sense of self is already a stretchy category, since cultural practices can alter it profoundly. Meditation techniques, for example, can broaden our ability to be compassionate and make the sense of self subside.

Another interesting development is the use of invasive techniques that substitute or change our perceptual and cognitive abilities. Examples include Neil Harbisson, who can hear colors, and Enno Park, who uses a hearing aid with a speech processor that converts sounds into digital signals sent directly to the brain.

Merging human bodies and technology can create new forms of sensing and acting. Even the ontological gap between what is human and what is technology might become blurred. But this is old news. The self does not have – and never had – any fixed limits. We become what we are by interacting with each other and our environment. And we are always evolving; no self is ever complete. The moment you meet another person, you undergo a change. This is why we should not be afraid of losing ourselves in the future.

Q: Will we co-evolve with machines, rather than creating a world similar to The Terminator?

A: Certainly the merging of humans and machines is an interesting idea, as it promises to overcome human limitations. It’s part of our human nature to adapt, and I have the impression that we are entering an era of a new form of cultural evolution that combines biological, technological, and cultural practices.

The most important lesson we will learn is that technology will develop in unforeseen, not programmable, ways. This might destroy the myth of the human as a rational being who can understand and predict the reasons and consequences of an action. Humanity is a very creative species, but we have a hard time understanding complex and nonlinear processes. These processes have become ubiquitous since the Internet became our second nature and stock markets are partly controlled by algorithms.

Complex processes also lie at the core of life. The best example is our own brain, whose inner workings are highly complex and nonlinear. Still, we lack the cognitive abilities to understand them. This is why we should experiment with, and reflect on, the possibilities of a life form that engages with technology as a complex process that cannot simply be controlled and predicted. Issues of data privacy, information ownership, and governance need to be discussed in light of our ecological entanglement with technology.

For more on this topic, see Live Business: The Importance of the Internet of Things.



About Kai Goerlich

Kai Goerlich is the Idea Director of Thought Leadership at SAP. His specialties include competitive intelligence, market intelligence, corporate foresight, trends, futuring, and ideation. Share your thoughts with Kai on Twitter @KaiGoe.

Waveform Generators: An Overview

Tiffany Rowe

When you need to make electronic measurements of any kind, you will probably consider using an oscilloscope or logic analyzer. However, those tools only work when there is actually a signal to measure, and not everything that needs to be measured can produce its own signal. In those cases, measurement can only take place if the signal is supplied externally, and that’s where waveform generators come into play.

But what type of waveform generator do you need? There are several different types, all of which have a different purpose and create different types of waveforms. Because so many new products within the Internet of Things require signal generation, it’s important to select the right waveform generator to ensure that your device works properly and delivers accurate information to the user.

Waveform generators in brief

There are three main types of waveform signal generators: function generators, arbitrary function generators, and arbitrary waveform generators.

A function generator produces standard periodic waveforms such as sine, square, triangle, sawtooth, ramp up/down, DC, and noise. As the least sophisticated type of waveform generator, function generators are not ideal when you need a very stable or low-noise waveform. However, for testing electronic equipment such as amplifiers, a function generator can produce a stable test signal, or introduce an error signal as well as white noise, to verify that the equipment functions properly.
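
To make those shapes concrete, here is a minimal sketch in Python with NumPy (an illustrative choice of language and library; the standard_waveform helper and its parameters are hypothetical, not tied to any particular instrument) that computes the standard function-generator waveforms as sample buffers:

```python
import numpy as np

def standard_waveform(shape, frequency_hz, sample_rate_hz, duration_s, amplitude=1.0):
    """Return one buffer of samples for a standard function-generator shape.

    Hypothetical helper for illustration only; real instruments generate
    these shapes in analog circuitry or dedicated synthesis hardware.
    """
    t = np.arange(0, duration_s, 1.0 / sample_rate_hz)
    phase = 2 * np.pi * frequency_hz * t
    if shape == "sine":
        y = np.sin(phase)
    elif shape == "square":
        y = np.sign(np.sin(phase))
    elif shape == "triangle":
        y = 2 / np.pi * np.arcsin(np.sin(phase))   # folds the sine into a triangle
    elif shape == "sawtooth":
        y = 2 * (frequency_hz * t - np.floor(0.5 + frequency_hz * t))
    elif shape == "dc":
        y = np.ones_like(t)
    elif shape == "noise":
        y = np.random.uniform(-1.0, 1.0, t.shape)  # white noise
    else:
        raise ValueError(f"unknown shape: {shape}")
    return amplitude * y

# Example: a 1 kHz sine sampled at 48 kHz for 10 ms
buffer = standard_waveform("sine", 1_000, 48_000, 0.01)
```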

An arbitrary function generator (AFG) is similar to a function generator but also offers on-board memory for user-defined waveforms. In addition to the predefined set of waveforms, an AFG lets you define your own waveform, store it in the generator’s memory, and then output it using direct digital synthesis. AFGs are useful for testing the same kinds of applications and devices as function generators, but the ability to create unique waveforms gives you greater flexibility.
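
As a rough sketch of the direct digital synthesis idea, the following Python example (the dds_output function and the 32-bit accumulator width are illustrative assumptions; real AFGs implement this in dedicated hardware) steps a phase accumulator through a user-defined wavetable held in memory:

```python
import numpy as np

def dds_output(wavetable, output_freq_hz, clock_hz, n_samples, acc_bits=32):
    """Sketch of direct digital synthesis (DDS) playback of a stored waveform.

    The wavetable plays the role of the AFG's on-board memory. A phase
    accumulator advances by a fixed tuning word on every clock tick, and its
    top bits index the wavetable, so the output frequency is roughly
    tuning_word * clock_hz / 2**acc_bits.
    """
    table_len = len(wavetable)
    acc_max = 1 << acc_bits
    tuning_word = int(round(output_freq_hz * acc_max / clock_hz))

    phase_acc = 0
    out = np.empty(n_samples)
    for i in range(n_samples):
        index = (phase_acc * table_len) >> acc_bits  # top bits select a sample
        out[i] = wavetable[index]
        phase_acc = (phase_acc + tuning_word) % acc_max
    return out

# Example: a user-defined waveform (half sine, half ramp) replayed at 2 kHz
table = np.concatenate([np.sin(np.linspace(0, np.pi, 512)),
                        np.linspace(1.0, -1.0, 512)])
samples = dds_output(table, output_freq_hz=2_000, clock_hz=1_000_000, n_samples=5_000)
```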

Finally, an arbitrary waveform generator (AWG) produces both standard waveforms and large, complex, user-defined waveforms, including linked or looped combinations that form unique sequences. AWGs require the most memory of the three types and use a clocking scheme that only lets the device play waveforms in the order in which they are stored in memory. This places some limitations on the output frequency; in other words, while you can define a specific waveform, the precision of that waveform may be limited.
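
The linking and looping of stored segments can be pictured with a small, purely illustrative Python sketch (the segment names and the play_sequence helper are hypothetical): playback follows the order in which entries appear in the stored sequence, mirroring the clocking limitation described above.

```python
import numpy as np

def play_sequence(segments, sequence):
    """Sketch of an AWG-style sequencer.

    segments: named waveform arrays standing in for the generator's memory.
    sequence: ordered list of (segment_name, repeat_count) entries; the
    output is produced strictly in this stored order.
    """
    output = []
    for name, repeats in sequence:
        for _ in range(repeats):
            output.append(segments[name])
    return np.concatenate(output)

# Example: three sine bursts, a pause, then a ramp repeated twice
t = np.linspace(0, 1, 1000, endpoint=False)
segments = {
    "sine_burst": np.sin(2 * np.pi * 5 * t),
    "silence": np.zeros(500),
    "ramp": np.linspace(-1.0, 1.0, 1000),
}
waveform = play_sequence(segments, [("sine_burst", 3), ("silence", 1), ("ramp", 2)])
```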

Which waveform generator is appropriate depends on the specific device and on how much precision the waveforms require. Cost, power use, safety, and security concerns also play a role.

Waveform generators and the IoT

Thanks to the growth of the IoT, waveform generators are seeing a sort of renaissance in terms of development. Embedded systems developers are looking, more than ever before, for waveform generators that are flexible, easy to use, and accurate. One of the major trends in waveform generator development, then, is to create generators that can work in both analog and digital modes and meet a wide variety of signal-generation needs.

Some of the most common applications of waveforms are in the medical field, and advances in waveform generation are already being seen in the delivery of patient care. For example, new technology allows healthcare providers to view waveform data automatically on their mobile devices, giving them the chance to monitor and respond to patients in real time. Waveforms from telemetry devices are also being recorded automatically into electronic health records, not only giving providers timely access to the data but also reducing the risk of errors inherent in the manual transmission of data.

Waveform generation is also a key part of testing IoT devices. Communication channels need to be tested and debugged to ensure that messages between the device and the hub are being sent and received as expected. For these tests, sensors collect data and convert it to an analog waveform, which is then examined and compared to other waveforms to ensure proper function. Without the waveforms, though, there would be nothing to test.
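
As one hedged illustration of that comparison step, the Python snippet below checks a captured sensor waveform against a reference using normalized cross-correlation; the metric, the waveforms_match helper, and the 0.95 threshold are illustrative choices rather than a prescribed test procedure:

```python
import numpy as np

def waveforms_match(captured, reference, min_correlation=0.95):
    """Compare a captured waveform to a reference using normalized
    cross-correlation; return True if the best alignment clears the threshold."""
    captured = (captured - captured.mean()) / (captured.std() + 1e-12)
    reference = (reference - reference.mean()) / (reference.std() + 1e-12)
    corr = np.correlate(captured, reference, mode="full") / len(reference)
    return corr.max() >= min_correlation

# Example: a noisy, slightly delayed copy of the reference should still pass
t = np.linspace(0, 1, 2000, endpoint=False)
reference = np.sin(2 * np.pi * 10 * t)
captured = np.roll(reference, 10) + 0.05 * np.random.randn(len(t))
print(waveforms_match(captured, reference))  # expected: True
```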

Choosing the right waveform generator for your IoT device can make a significant difference in its function, not to mention the cost. As you develop your embedded system, compare the options and choose the generator that will best meet your expectations and requirements.

For more insight on the Internet of Things, see Live Business: The Digitization of Everything.



About Tiffany Rowe

Tiffany Rowe is a marketing administrator who assists in contributing resourceful content. Tiffany prides herself on her ability to provide high-quality content that readers will find valuable.

Tags: innovation, IoT

The Robotics Race

Stephanie Overby

As robotic technologies continue to advance, along with related technologies such as speech and image recognition, memory and analytics, and virtual and augmented reality, better, faster, and cheaper robots will emerge. These machines – sophisticated, discerning, and increasingly autonomous – are certain to have an impact on business and society. But will they bring job displacement and danger or create new categories of employment and protect humankind?

We talked to SAP’s Kai Goerlich, along with Doug Stephen of the Institute for Human and Machine Cognition and Brett Kennedy from NASA’s Jet Propulsion Laboratory, about the advances we can expect in robotics, robots’ limitations, and their likely impact on the world.


Q: What are the biggest drivers of the robot future?

Kai Goerlich: Several trends will come together to drive the robotics market in the next 15 to 20 years. The number of connected things and sensors will grow to the billions and the data universe will likewise explode. We think the speed of analytics will increase, with queries answered in milliseconds. Image and voice recognition – already quite good – will surpass human capabilities. And the virtual and augmented reality businesses will take off. These technologies are all building blocks for a new form of robotics that will vastly expand today’s capabilities in a diversity of forms and applications.

Brett Kennedy: When I was getting out of school, there weren’t that many people working in robotics. Now kids in grade school are exposed to a lot of things that I had to learn on the job, so they come into the workplace with a lot more knowledge and fewer preconceptions about what robots can or can’t do based on their experiences in different industries. That results in a much better-trained workforce in robotics, which I think is the most important thing.

In addition, many of the parts that we need for more sophisticated robots are coming out of other fields. We could never create enough critical mass to develop these technologies specifically for robotics. But we’re getting them from other places. Improvements in battery technology, which enable a robot to function without being plugged in, are being driven by industries such as mobile electronics and automotive, for example. Our RoboSimian has a battery drive originally designed for an electric motorcycle.

Q: Do you anticipate a limit to the tasks robots will be able to master as these core technologies evolve?

Goerlich: Robots will take over more and more complex functions, but I think the ultimate result will be that new forms of human-machine interactions will emerge. Robots have advantages in crunching numbers, lifting heavy objects, working in dangerous environments, moving with precision, and performing repetitive tasks. However, humans still have advantages in areas such as abstraction, curiosity, creativity, dexterity, fast and multidimensional feedback, self-motivation, goal setting, and empathy. We’re also comparatively lightweight and efficient.

Doug Stephen: We’re moving toward a human-machine collaboration approach, which I think will become the norm for more complex tasks for a very long time. Even when we get to the point of creating more-complex and general-purpose robots, they won’t be autonomous. They’ll have a great deal of interaction with some sort of human teammate or operator.

Q: How about the Mars Rover? It’s relatively autonomous already.

Kennedy: The Mars Rover is autonomous to a certain degree. It is capable of supervised autonomy because there’s no way to control it at that distance with a joystick. But it’s really just executing the intent of the operator here on the ground.

In 2010, DARPA launched its four-year Autonomous Robotic Manipulator Challenge to create machines capable of carrying out complex tasks with only high-level human involvement. Some robots completed the challenge, but they were incredibly slow. We may get to a point where robots can do these sorts of things on their own. But they’re just not as good as people at this point. I don’t think we’re all going to be coming home to robot butlers anytime soon.

Stephen: It’s extremely difficult to program robots to behave as humans do. When we trip over something, we can recover quickly, but a robot will topple over and damage itself. The problem is that our understanding of our human abilities is limited. We have to figure out how to formally define the processes that human beings or any legged animals use to maintain balance or to walk and then tell a robot how to do it.

You have to be really explicit in the instructions that you give to these machines. Amazon has been working on these problems for a while with its “picking challenge”: How do you teach a robot to pick and pack boxes the way a human does? Right now, it’s a challenge for robots to identify what each item is.

Q: So if I’m not coming home to a robot butler in 20 years, what am I coming home to?

Goerlich: We naturally tend to imagine humanoid robots, but I think the emphasis will be on human-controlled robots, not necessarily human-shaped units. Independent robots will make sense in some niches, but they are more complex and expensive. The symbiosis of human and machine is more logical; it will be the most efficient way forward. Robotic suits, exoskeletons, and robotic limbs with all kinds of human-support functions will be the norm. The future will be more Iron Man than Terminator.

Q: What will be the impact on the job market as robots become more advanced?

Goerlich: The default fear is of a labor-light economy where robots do most of the work and humans take what’s left over. But that’s last-century thinking. Robots won’t simply replace workers on the assembly line. In fact, we may not have centralized factories anymore; 3D printing and the maker movement could change all that. And it is probably not the Terminator scenario either, where humanoid robots take over the world and threaten humankind. The indicators instead point to human-machine co-evolution.

There’s no denying that advances in robotics and artificial intelligence will displace some jobs performed by humans today. But for every repetitive job that is lost to automation, it’s possible that a more interesting, creative job will take its place. This will require humans to focus on the skills that robots can’t replicate – and, of course, rethink how we do things and how the economy works.

Q: What can businesses do today to embrace the projected benefits of advanced robotics?

Kennedy: Experiment. The very best things that we’ve been able to produce have come from people having the tools and then figuring out how they can be used. I don’t think we understand the future well enough to predict exactly how robots are going to be used, but I think we can say that they certainly will be used.

Stephanie Overby is an independent writer and editor focused on the intersection of business and technology.

To learn more about how humans and robots will co-evolve, read the in-depth report Bring Your Robot to Work.




What Is The Key To Rapid Innovation In Healthcare?

Paul Clark

Healthcare technology has already made incredible advances, but the digital transformation of the healthcare industry is still considered to be in its infancy. According to the SAP eBook Connected Care: The Digital Pulse of Global Healthcare, the possibilities and opportunities that lie ahead for the Internet of Healthcare Things (IoHT) are astounding.

Many health organizations recognize the importance of going digital and have already deployed programs involving IoT, cloud, Big Data, analytics, and mobile technologies. However, over the last decade, investments in many e-health programs have delivered only modest returns, so the progress of healthcare technology has been slow out of the gate.

What’s slowing the pace of healthcare innovation?

In the past, attempts at rapid innovation in healthcare have been bogged down by a slew of stakeholders, legacy systems, and regulations that are inherent to the industry. This presents some Big Data challenges with connected healthcare, such as gathering data from disparate silos of medical information. Secrecy is also an ongoing challenge, as healthcare providers, researchers, pharmaceutical companies, and academic institutions tend to protect personal and proprietary data. These issues have caused enormous complexity and have delayed or deterred attempts to build fully integrated digital healthcare systems.

So what is the key to rapid innovation?

According to the Connected Care eBook, healthcare organizations can overcome these challenges by using new technologies and collaborating with other players in the healthcare industry, as well as partners outside of the industry, to get the most benefit out of digital technology.

To move forward with digital transformation in healthcare, there is a need for digital architectures and platforms where a number of different technologies can work together from both a technical and a business perspective.

The secret to healthcare innovation: connected health platforms

New platforms are emerging that foster collaboration between different technologies and healthcare organizations to solve complex medical system challenges. These platforms can support a broad ecosystem of partners, including developers, researchers, and healthcare organizations. Healthcare networks that are connected through this type of technology will be able to accelerate the development and delivery of innovative, patient-centered solutions.

Platforms and other digital advancements present exciting new business opportunities for numerous healthcare stakeholders striving to meet the increasing expectations of tech-savvy patients.

The digital evolution of the healthcare industry may still be in its infancy, but it is growing up fast as new advancements in technology quickly develop. Are you ready for the next phase of digital transformation in the global healthcare industry?

For an in-depth look at how technology is changing the face of healthcare, download the SAP eBook Connected Care: The Digital Pulse of Global Healthcare.

See how the digital era is affecting the business environment in the SAP eBook The Digital Economy: Reinventing the Business World.

Discover the driving forces behind digital transformation in the SAP eBook Digital Disruption: How Digital Technology is Transforming Our World.


About Paul Clark

Paul Clark is the Senior Director of Technology Partner Marketing at SAP. He is responsible for developing and executing partner marketing strategies, activities, and programs in joint go-to-market plans with global technology partners. The goal is to increase opportunities, pipeline, and revenue through demand generation via SAP's global and local partner ecosystems.