The Role Of Imagination In Creating The Next Set Of Breakthrough Innovations

Mukesh Gupta

I stumbled across “As We May Think,” an article that Vannevar Bush, then Director of the U.S. Office of Scientific Research and Development, wrote for The Atlantic in 1945.

This article validates for me the role that imagination can play in the innovation process. Dr. Bush was able to predict, in part, what the world would look and feel like many years into the future. Here’s what I learned from his approach to ideation, and how we can apply it in our own quest to imagine the innovations of the future.

Before trying to predict the future, understand the present

First, Bush thoroughly analyzed the present-day (mid-1940s) situation, including what WWII had fostered and what it had hindered, from the perspective of scientific inquiry and progress. He shared his thoughts on the state of scientific research, noting where science had made progress and where it had stood still.

Identify potentialities by extrapolation

He then extrapolated into the future by identifying the potentialities in the progress already made, and shared what he thought would happen in the near term if things continued on the same trajectory. This is where he talked about immediate and imminent progress based on what was already happening. Most futurists and trend predictors use this same process to create their forecasts.

Now, let your imagination fly

Once he had built a good, solid foundation by identifying the progress made and what was expected in the near term, he allowed his imagination to take flight. He talked about the camera becoming so small that someone could carry one strapped onto their forehead (sounds to me like a GoPro):

The camera hound of the future wears on his forehead a lump a little larger than a walnut.

He then explored and explained what the film and printing process would look like:

Often it would be advantageous to be able to snap the camera and to look at the picture immediately.

He imagined advances in microfilm technology that would enable the whole of the Encyclopedia Britannica (one of the largest reference works of the time) to be available on something the size of a matchbox.

The Encyclopedia Britannica could be reduced to the volume of a matchbox. A library of a million volumes could be compressed into one end of a desk. If the human race has produced since the invention of movable type a total record, in the form of magazines, newspapers, books, tracts, advertising blurbs, correspondence, having a volume corresponding to a billion books, the whole affair, assembled and compressed, could be lugged off in a moving van.

The material for the microfilm Britannica would cost a nickel, and it could be mailed anywhere for a cent.

In addition to storing all of this knowledge in a small space, he said it was also important to create new knowledge, and to do so in an easy and simple way. He talked about a device into which someone speaks (in a specific way) and which converts the speech into the appropriate text (sounds a lot like today’s voice-to-text systems – Siri?).

To make the record, we now push a pencil or tap a typewriter. Then comes the process of digestion and correction, followed by an intricate process of typesetting, printing, and distribution. To consider the first stage of the procedure, will the author of the future cease writing by hand or typewriter and talk directly to the record? He does so indirectly, by talking to a stenographer or a wax cylinder; but the elements are all present if he wishes to have his talk directly produce a typed record. All he needs to do is to take advantage of existing mechanisms and to alter his language.

He then took flight in his imagination to put all of this together and predict what it would feel like to live in an era with such devices:

One can now picture a future investigator in his laboratory. His hands are free, and he is not anchored. As he moves about and observes, he photographs and comments. Time is automatically recorded to tie the two records together. If he goes into the field, he may be connected by radio to his recorder. As he ponders over his notes in the evening, he again talks his comments into the record. His typed record, as well as his photographs, may both be in miniature, so that he projects them for examination.

He acknowledged that a lot would need to happen to get from 1945’s reality to his imagined one, but he was confident it was all possible. He showed how past progress implied that the pace of innovation and creativity would only accelerate, meaning his imagined reality might not be far off from the time he was writing the piece.

Next he described mathematical inquiry and his definition of a mathematician:

A mathematician is not a man who can readily manipulate figures; often he cannot. He is not even a man who can readily perform the transformations of equations by the use of calculus. He is primarily an individual who is skilled in the use of symbolic logic on a high plane, and especially he is a man of intuitive judgment in the choice of the manipulative processes he employs.

This is probably the closest definition I have come across for a data scientist. Bush said machines would do the actual mathematical calculations, enabling the mathematician to think at a higher order of logic. He also understood that the potential of such a machine was not limited to the scientist.

The scientist, however, is not the only person who manipulates data and examines the world about him by the use of logical processes, although he sometimes preserves this appearance by adopting into the fold anyone who becomes logical, much in the manner in which a British labor leader is elevated to knighthood. Whenever logical processes of thought are employed – that is, whenever thought for a time runs along an accepted groove – there is an opportunity for the machine. Formal logic used to be a keen instrument in the hands of the teacher in his trying of students’ souls. It is readily possible to construct a machine which will manipulate premises in accordance with formal logic, simply by the clever use of relay circuits. Put a set of premises into such a device and turn the crank, and it will readily pass out conclusion after conclusion, all in accordance with logical law, and with no more slips than would be expected of a keyboard adding machine.

I think this sounds like a general-purpose computer or even a smartphone. He then went on to imagine how a retail store could be run if all these innovations became a reality – it sounds a lot like an ERP system running the entire store and its operations.
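
Out of curiosity, here’s a minimal sketch of such a “turn the crank” logic machine – a toy forward-chaining engine over if-then premises. The rules and facts are my own illustration, not anything from Bush’s article:

```python
# A toy "logic machine": given premises and if-then rules, it cranks
# out conclusion after conclusion until nothing new can be derived.
# Rules and facts below are invented for illustration.

def forward_chain(facts, rules):
    """facts: set of known propositions; rules: list of (premises, conclusion)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)  # a new conclusion "passes out"
                changed = True
    return derived

rules = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "freezing"}, "ice"),
]
# Derives wet_ground, then ice, from the two starting premises.
print(forward_chain({"rain", "freezing"}, rules))
```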

He also predicted that machines could be taught to learn and to operate not just by selection through indexing but by association, and that machines would be able to beat humans at such tasks (the story of IBM’s Watson winning Jeopardy!?) – what today is called machine learning.

Man cannot hope fully to duplicate this mental process artificially, but he certainly ought to be able to learn from it. In minor ways he may even improve, for his records have relative permanency. The first idea, however, to be drawn from the analogy concerns selection. Selection by association, rather than indexing, may yet be mechanized. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage.

He described a personal machine (he called it the “memex”) that stores all the information and data we need as individuals (including all the knowledge that humans have accumulated over the centuries) and makes it available whenever a person wants it. Information would be accessible by associative indexing (sounds like hyperlinking to me), which would allow us to move across connected and relevant topics.

The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English longbow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.

And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

Sounds a lot like a combination of Google, Wikipedia, and Evernote to me.
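
For fun, here is a minimal sketch of what such an associative “trail” might look like as a modern data structure – items tied together in sequence, with links and the owner’s own comments. The item names are my own illustration, not Bush’s:

```python
# A toy "memex trail": a sequence of items, each associatively linked
# to the next, with room for the owner's own comments.
from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    content: str
    links: list = field(default_factory=list)  # associative links to other items

@dataclass
class Trail:
    name: str
    items: list = field(default_factory=list)

    def tie(self, item):
        """Tie a new item to the end of the trail, as Bush describes."""
        if self.items:
            self.items[-1].links.append(item)  # link from the previous item
        self.items.append(item)

trail = Trail("Turkish bow vs. English longbow")
trail.tie(Item("Encyclopedia entry", "Interesting but sketchy article"))
trail.tie(Item("History item", "Accounts of Crusade skirmishes"))
trail.tie(Item("Own comment", "Elasticity of materials seems decisive"))

for item in trail.items:
    print(item.title, "->", [linked.title for linked in item.links])
```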

He then went on to talk about the fact that science is a tool that can create weapons, but also innovations that not only enable humanity to keep track of its history but can create a completely new future as well.

Applied imagination

In a single 1945 article, Vannevar Bush imagined many of the innovations that we enjoy today, seven decades later. He imagined things similar to the GoPro, selfie sticks, Google Glass, ERP systems, a digitized Encyclopedia Britannica, search engines, note-taking in the cloud, voice-to-text and text-to-voice conversion, personal computers, mobile phones, and much more.

This shows that if we start from where we are today, apply our imagination, and take leaps of faith, we can imagine what the future will look like and then go after this future with all our current strengths.

This ability to imagine is critical for all of us who wish to be part of the generation of innovators who will define how our future shapes up.

How to develop this ability to imagine

In “The Real Neuroscience of Creativity,” Scott Barry Kaufman talks about three kinds of neural networks – the Executive Attention Network (activated when we need focused attention to do something specific), the Imagination Network (also called the default network), and the Salience Network (which acts as the “switching” network, deciding which network needs to be activated when).

… the Default Network (referred to here as the Imagination Network) is involved in “constructing dynamic mental simulations based on personal past experiences such as used during remembering, thinking about the future, and generally when imagining alternative perspectives and scenarios to the present.” The Imagination Network is also involved in social cognition. For instance, when we are imagining what someone else is thinking, this brain network is active. The Imagination Network involves areas deep inside the prefrontal cortex and temporal lobe (medial regions), along with communication with various outer and inner regions of the parietal cortex.

Conclusion

What this tells me is that the ability to imagine is inherently human and we are all capable of letting our imagination soar, if we want to.

So, the inability to imagine new or alternate realities is largely self-induced – and sometimes induced by our systems (e.g., education, and even the culture of our organizations). This also means that it is within our own hands to set this right and start imagining alternate realities. The more we practice, the better we will get at it.

The more important it is for us to innovate and create, the more critical the skill of imagining alternate realities becomes.

When Bush wrote this piece, it was a time when technological breakthroughs were imminent.

We are again at the same kind of crossroads, and technological breakthroughs are imminent. The question we now need to ask is:

Will we bring in the breakthroughs, or will we stand and wait for someone to do it for us?

More predictions: AI will make customers and employees happier – as long as it learns to respect our boundaries. Learn more about Empathy: The Killer App for Artificial Intelligence.

About Mukesh Gupta

Mukesh Gupta previously held the role of Executive Liaison for the SAP User Group in India, working as the bridge between the user group and SAP (development, consulting, sales, and product management).

How To Design Your Company’s Digital Transformation

Sam Yen

The September issue of the Harvard Business Review features a cover story on design thinking’s coming of age. We have been applying design thinking within SAP for the past 10 years, and I’ve witnessed the growth of this human-centered approach to innovation firsthand.

Design thinking is, as the HBR piece points out, “the best tool we have for … developing a responsive, flexible organizational culture.”

This means businesses are doing more to learn about their customers by interacting directly with them. We’re seeing this change in our work on d.forum — a community of design thinking champions and “disruptors” from across industries.

Meanwhile, technology is making it possible to know exponentially more about a customer. Businesses can now make increasingly accurate predictions about customers’ needs well into the future. The businesses best able to access and pull insights from this growing volume of data will win. That requires a fundamental change for our own industry; it necessitates a digital transformation.

So, how do we design this digital transformation?

It starts with the customer and an application of design thinking throughout an organization – blending business, technology and human values to generate innovation. Business is already incorporating design thinking, as the HBR cover story shows. We in technology need to do the same.

Design thinking plays an important role because it helps articulate what the end customer’s experience is going to be like. It helps focus all aspects of the business on understanding and articulating that future experience.

Once an organization is able to do that, the insights from that consumer experience need to be drawn down into the business, with the central question becoming: What does this future customer experience mean for us as an organization? What barriers do we need to remove? Do we need to organize ourselves differently? Does our process need to change – if it does, how? What kind of new technology do we need?

Then an organization must look carefully at roles within itself. What does this knowledge of the end customer’s future experience mean for an individual in human resources, for example, or finance? Those roles can then be viewed as end experiences unto themselves, with organizations applying design thinking to learn about the needs inherent to those roles. They can then change roles to better meet the end customer’s future needs. This end customer-centered approach is what drives change.

This also means design thinking is more important than ever for IT organizations.

We, in the IT industry, have been charged with being responsive to business, using technology to solve the problems business presents. Unfortunately, business sometimes views IT as the organization that merely keeps the lights on. To use the analogy of a store: business is responsible for the front office, focused on growing the business where consumers directly interact with products and marketing, while the perception is that IT focuses on the back office, keeping servers running and the distribution system humming. The key is to have business and IT align to meet the needs of the front office together.

Remember what I said about the growing availability of consumer data? The business best able to access and learn from that data will win. Those of us in IT organizations have the technology to make that win possible, but the way we are seen and our very nature needs to change if we want to remain relevant to business and participate in crafting the winning strategy.

We need to become more front office and less back office, proving to business that we are innovation partners in technology.

This means, in order to communicate with businesses today, we need to take a design thinking approach. We in IT need to show we have an understanding of the end consumer’s needs and experience, and we must align that knowledge and understanding with technological solutions. When this works — when the front office and back office come together in this way — it can lead to solutions that a company could otherwise never have realized.

There are, of course, different qualities to front-office and back-office requirements. The back office is the foundation of a company and requires robustness, stability, and reliability. The front office, on the other hand, moves much more quickly; it is always changing with new product offerings and marketing campaigns, and its technology must likewise show agility, flexibility, and speed. The business needs both functions to survive. This is a challenge for IT organizations, but it is not an impossible shift for us to make.

Here’s the breakdown of our challenge.

1. We need to better understand the real needs of the business.

This means learning more about the experience and needs of the end customer and then translating that information into technological solutions.

2. We need to be involved in more of the strategic discussions of the business.

We should use the regular invitations to meetings with the business as an opportunity to surface deeper learning about the end consumer, along with the technology solutions that the business may otherwise not know to ask for or how to implement.

The IT industry overall may not have a track record of operating in this way, but if we are not involved in the strategic direction of companies and shedding light on the future path, we risk not being considered innovation partners for the business.

We must collaborate with business, understand the strategic direction and highlight the technical challenges and opportunities. When we do, IT will become a hybrid organization – able to maintain the back office while capitalizing on the front office’s growing technical needs. We will highlight solutions that business could otherwise have missed, ushering in a digital transformation.

Digital transformation goes beyond just technology; it requires a mindset. See What It Really Means To Be A Digital Organization.

This story originally appeared on SAP Business Trends.

About Sam Yen

Sam Yen is the Chief Design Officer for SAP and the Managing Director of SAP Labs Silicon Valley. He is focused on driving a renewed commitment to design and user experience at SAP. Under his leadership, SAP further strengthens its mission of listening to customers’ needs and delivering tangible results, including SAP Fiori, SAP Screen Personas, and SAP’s UX design services.

How Productive Could You Be With 45 Minutes More Per Day?

Michael Rander

Chances are that you are already feeling your fair share of organizational complexity when navigating your current company, but have you ever considered just how much time is spent across all companies on managing complexity? According to a recent study by the Economist Intelligence Unit (EIU), the global impact of complexity is mind-blowing – and not in a good way.

The study revealed that 38% of respondents spent 16%-25% of their time just dealing with organizational complexity, and 17% spent a staggering 26%-50% of their time doing so. To put that into more concrete numbers, in the US alone, if executives could cut their time spent managing complexity in half, an estimated 8.6 million hours could be saved a week. That corresponds to 45 minutes per executive per day.
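
As a back-of-envelope check of how those two figures relate (assuming a five-day workweek; the executive headcount that falls out is my inference, not a number from the study):

```python
# Back-of-envelope: does "8.6 million hours a week" square with
# "45 minutes per executive per day"? Assumes a 5-day workweek; the
# implied executive count is my inference, not an EIU figure.
hours_saved_per_week = 8_600_000
minutes_per_exec_per_day = 45
workdays_per_week = 5

hours_per_exec_per_week = minutes_per_exec_per_day / 60 * workdays_per_week  # 3.75
implied_executives = hours_saved_per_week / hours_per_exec_per_week
print(f"Implied number of US executives: {implied_executives:,.0f}")  # ~2.3 million
```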

The potential productivity impact of every executive having 45 more minutes to work every single day is clearly significant, and considering that 55% say that their organization is either very or extremely complex, why are we not making the reduction of complexity one of our top-of-mind issues?

The problem is that identifying the sources of complexity is complex in and of itself. Key sources of complexity include organizational size, executive priorities, pace of innovation, decision-making processes, vastly increasing amounts of data to manage, organizational structures, and the very culture of the company. As a consequence, the answers are not universal by any means.

That being said, the negative productivity impact of complexity, regardless of its specific source, is felt similarly across a very large segment of respondents, with 55% stating that complexity has taken a direct toll on profitability over the past three years. This is such a serious problem that 8% of respondents actually slowed down their company’s growth in order to deal with complexity.

So, if complexity oftentimes impacts productivity and subsequently profitability, what are some of the more successful initiatives that companies are taking to combat these effects? Among the answers from the EIU survey, the following were highlighted among the most likely initiatives to reduce complexity and ultimately increase productivity:

  • Making it a company-wide goal to reduce complexity means that the executive level has to live and breathe simplification in order for the rest of the organization to get behind it. Changing behaviors across the organization requires strong leadership, commitment, and change management, and these initiatives ultimately lead to improved decision-making processes, which respondents reported as the top benefit of reducing complexity. From a leadership perspective, this also requires setting appropriate metrics for measuring outcomes; productivity and efficiency were by far the most popular metric choices among respondents, though, strangely, collaboration-related metrics did not rank high despite collaboration being a high-level priority.
  • Promoting a culture of collaboration means enabling employees and management alike to collaborate not only within their teams but also across the organization, with partners, and with customers. Creating cross-functional roles to facilitate collaboration was cited by 56% as the most helpful strategy in achieving this goal.
  • More than half (54%) of respondents found the implementation of new technology and tools to be a successful step toward reducing complexity and improving productivity. Enabling collaboration, reducing information overload, building scenarios and prognoses, and enabling real-time decision-making are all key areas in which technology can help reduce complexity at every level of the organization.

While these initiatives won’t help everyone, it is interesting to see that more than half of companies believe that if they could cut complexity in half, they could be at least 11%-25% more productive. Nearly one in five respondents even indicated that they could be 26%-50% more productive – a massive improvement.

The question then becomes whether we can make complexity and its impact on productivity not only more visible as a key issue for companies to address, but (even more importantly) also something that every company and every employee should be actively working to reduce. The potential productivity gains listed by respondents certainly provide food for thought, and few other corporate activities are likely to gain that level of ROI.

Just imagine having 45 minutes each and every day for actively pursuing new projects, getting innovative, collaborating, mentoring, learning, reducing stress, etc. What would you do? The vision is certainly compelling, and the question is: are we, as companies, leaders, and employees, going to do something about it?

Feel free to follow me on Twitter: @michaelrander

About Michael Rander

Michael Rander is the Global Research Director for Future of Work at SAP. He is an experienced project manager, strategic and competitive market researcher, and operations manager, as well as an avid photographer, athlete, traveler, and entrepreneur. Share your thoughts with Michael on Twitter @michaelrander.

More Than Noise: Digital Trends That Are Bigger Than You Think

By Maurizio Cattaneo, David Delaney, Volker Hildebrand, and Neal Ungerleider

In the tech world in 2017, several trends emerged as signals amid the noise, signifying much larger changes to come.

As we noted in last year’s More Than Noise list, things are changing—and the changes are occurring in ways that don’t necessarily fit into the prevailing narrative.

While many of 2017’s signals have a dark tint to them, perhaps reflecting the times we live in, we have sought out some rays of light to illuminate the way forward. The following signals differ considerably, but understanding them can help guide businesses in the right direction for 2018 and beyond.

When a team of psychologists, linguists, and software engineers created Woebot, an AI chatbot that helps people learn cognitive behavioral therapy techniques for managing mental health issues like anxiety and depression, they did something unusual, at least when it comes to chatbots: they submitted it for peer review.

Stanford University researchers recruited a sample group of 70 college-age participants on social media to take part in a randomized controlled study of Woebot. The researchers found that their creation was useful for improving anxiety and depression symptoms. A study of the user interaction with the bot was submitted for peer review and published in the Journal of Medical Internet Research Mental Health in June 2017.

While Woebot may not revolutionize the field of psychology, it could change the way we view AI development. Well-known figures such as Elon Musk and Bill Gates have expressed concerns that artificial intelligence is essentially ungovernable. Peer review, such as with the Stanford study, is one way to approach this challenge and figure out how to properly evaluate and find a place for these software programs.

The healthcare community could be onto something. We’ve already seen instances where AI chatbots have spun out of control, such as when internet trolls trained Microsoft’s Tay to become a hate-spewing misanthrope. Bots are only as good as their design; making sure they stay on message and don’t act in unexpected ways is crucial.

This is especially true in healthcare. When chatbots are offering therapeutic services, they must be properly designed, vetted, and tested to maintain patient safety.

It may be prudent to apply the same level of caution to a business setting. By treating chatbots as if they’re akin to medicine or drugs, we have a model for thorough vetting that, while not perfect, is generally effective and time tested.

It may seem like overkill to think of chatbots that manage pizza orders or help resolve parking tickets as potential health threats. But it’s already clear that AI can have unintended side effects that could extend far beyond Tay’s loathsome behavior.

For example, in July, Facebook shut down an experiment where it challenged two AIs to negotiate with each other over a trade. When the experiment began, the two chatbots quickly went rogue, developing linguistic shortcuts to reduce negotiating time and leaving their creators unable to understand what they were saying.

The implications are chilling. Do we want AIs interacting in a secret language because designers didn’t fully understand what they were designing?

In this context, the healthcare community’s conservative approach doesn’t seem so farfetched. Woebot could ultimately become an example of the kind of oversight that’s needed for all AIs.

Meanwhile, it’s clear that chatbots have great potential in healthcare—not just for treating mental health issues but for helping patients understand symptoms, build treatment regimens, and more. They could also help unclog barriers to healthcare, which is plagued worldwide by high prices, long wait times, and other challenges. While they are not a substitute for actual humans, chatbots can be used by anyone with a computer or smartphone, 24 hours a day, seven days a week, regardless of financial status.

Finding the right governance for AI development won’t happen overnight. But peer review, extensive internal quality analysis, and other processes will go a long way to ensuring bots function as expected. Otherwise, companies and their customers could pay a big price.

Elon Musk is an expert at dominating the news cycle with his sci-fi premonitions about space travel and high-speed hyperloops. However, he captured media attention in Australia in April 2017 for something much more down to earth: how to deal with blackouts and power outages.

In 2016, a massive blackout hit the state of South Australia following a storm. Although power was restored quickly in Adelaide, the capital, people in the wide stretches of arid desert that surround it spent days waiting for the power to return. That hit South Australia’s wine and livestock industries especially hard.

South Australia’s electrical grid currently gets more than half of its energy from wind and solar, with coal and gas plants acting as backups for when the sun hides or the wind doesn’t blow, according to ABC News Australia. But this network is vulnerable to sudden loss of generation—which is exactly what happened in the storm that caused the 2016 blackout, when tornadoes ripped through some key transmission lines. Getting the system back on stable footing has been an issue ever since.

Displaying his usual talent for showmanship, Musk stepped in and promised to build the world’s largest battery to store backup energy for the network—and he pledged to complete it within 100 days of signing the contract or the battery would be free. Pen met paper with South Australia and French utility Neoen in September. As of press time in November, construction was underway.

For South Australia, the Tesla deal offers an easy and secure way to store renewable energy. Once completed, Tesla’s 129 MWh battery will be the most powerful battery system in the world, outstripping the next largest by 60%, according to Gizmodo. The battery, which is stationed at a wind farm, will cover temporary drops in wind power and kick in to help conventional gas and coal plants balance generation with demand across the network. South Australian citizens and politicians largely support the project, which Tesla claims will be able to power 30,000 homes.

Until Musk made his bold promise, batteries did not figure much in renewable energy networks, mostly because they just aren’t that good. They have limited charges, are difficult to build, and are difficult to manage. Utilities also worry about relying on the same lithium-ion battery technology as cellphone makers like Samsung, whose Galaxy Note 7 had to be recalled in 2016 after some defective batteries burst into flames, according to CNET.

However, when made right, the batteries are safe. It’s just that they’ve traditionally been too expensive for large-scale uses such as renewable power storage. But battery innovations such as Tesla’s could radically change how we power the economy. According to a study that appeared this year in Nature, the continued drop in the cost of battery storage has made renewable energy price-competitive with traditional fossil fuels.

This is a massive shift. Or, as David Roberts of news site Vox puts it, “Batteries are soon going to disrupt power markets at all scales.” Furthermore, if the cost of batteries continues to drop, supply chains could experience radical energy cost savings. This could disrupt energy utilities, manufacturing, transportation, and construction, to name just a few, and create many opportunities while changing established business models. (For more on how renewable energy will affect business, read the feature “Tick Tock” in this issue.)

Battery research and development has become big business. Thanks to electric cars and powerful smartphones, there has been incredible pressure to make more powerful batteries that last longer between charges.

The proof of this is in the R&D funding pudding. A Brookings Institution report notes that both the Chinese and U.S. governments offer generous subsidies for lithium-ion battery advancement. Automakers such as Daimler and BMW have established divisions marketing residential and commercial energy storage products. Boeing, Airbus, Rolls-Royce, and General Electric are all experimenting with various electric propulsion systems for aircraft—which means that hybrid airplanes are also a possibility.

Meanwhile, governments around the world are accelerating battery research investment by banning internal combustion vehicles. Britain, France, India, and Norway are seeking to go all electric as early as 2025 and by 2040 at the latest.

In the meantime, expect huge investment and new battery innovation from interested parties across industries that all share a stake in the outcome. This past September, for example, Volkswagen announced a €50 billion research investment in batteries to help bring 300 electric vehicle models to market by 2030.

At first, it sounds like a narrative device from a science fiction novel or a particularly bad urban legend.

Powerful cameras in several Chinese cities capture photographs of jaywalkers as they cross the street and, several minutes later, display their photograph, name, and home address on a large screen posted at the intersection. Several days later, a summons appears in the offender’s mailbox demanding payment of a fine or fulfillment of community service.

As Orwellian as it seems, this technology is very real for residents of Jinan and several other Chinese cities. According to a Xinhua interview with Li Yong of the Jinan traffic police, “Since the new technology has been adopted, the cases of jaywalking have been reduced from 200 to 20 each day at the major intersection of Jingshi and Shungeng roads.”

The sophisticated cameras and facial recognition systems already used in China—and their near–real-time public shaming—are an example of how machine learning, mobile phone surveillance, and internet activity tracking are being used to censor and control populations. Most worryingly, the prospect of real-time surveillance makes running surveillance states such as the former East Germany and current North Korea much more financially efficient.

According to a 2015 discussion paper by the Institute for the Study of Labor, a German research center, by the 1980s almost 0.5% of the East German population was directly employed by the Stasi, the country’s state security service and secret police—1 for every 166 citizens. An additional 1.1% of the population (1 for every 66 citizens) were working as unofficial informers, which represented a massive economic drain. Automated, real-time, algorithm-driven monitoring could potentially drive the cost of controlling the population down substantially in police states—and elsewhere.

We could see a radical new era of censorship that is much more manipulative than anything that has come before. Previously, dissidents were identified when investigators manually combed through photos, read writings, or listened in on phone calls. Real-time algorithmic monitoring means that acts of perceived defiance can be identified and deleted in the moment and their perpetrators marked for swift judgment before they can make an impression on others.

Businesses need to be aware of the wider trend toward real-time, automated censorship and how it might be used in both commercial and governmental settings. These tools can easily be used in countries with unstable political dynamics and could become a real concern for businesses that operate across borders. Businesses must learn to educate and protect employees when technology can censor and punish in real time.

Indeed, the technologies used for this kind of repression could be easily adapted from those that have already been developed for businesses. For instance, both Facebook and Google use near–real-time facial identification algorithms that automatically identify people in images uploaded by users—which helps the companies build out their social graphs and target users with profitable advertisements. Automated algorithms also flag Facebook posts that potentially violate the company’s terms of service.

China is already using these technologies to control its own people in ways that are largely hidden to outsiders.

According to a report by the University of Toronto’s Citizen Lab, the popular Chinese social network WeChat operates under a policy its authors call “One App, Two Systems.” Users with Chinese phone numbers are subjected to dynamic keyword censorship that changes depending on current events and whether a user is in a private chat or in a group. Depending on the political winds, users are blocked from accessing a range of websites that report critically on China through WeChat’s internal browser. Non-Chinese users, however, are not subject to any of these restrictions.
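
To make the mechanics concrete, here is a toy sketch of context-dependent filtering. The real keyword lists and triggering logic are not public, so everything below is invented for illustration:

```python
# A toy version of context-dependent keyword filtering: whether a
# message is blocked depends on the user's registration region and the
# chat context, and the lists can be swapped out as events unfold.
# Keywords and structure are invented for illustration.

BLOCKLISTS = {
    "group":   {"banned_topic_a", "banned_topic_b"},  # broader list in groups
    "private": {"banned_topic_a"},                    # narrower list in private chat
}

def is_blocked(message: str, user_region: str, context: str) -> bool:
    if user_region != "CN":
        return False  # "One App, Two Systems": no filtering for accounts abroad
    return any(keyword in message for keyword in BLOCKLISTS[context])

# The same message is silently dropped or delivered depending on context:
print(is_blocked("thoughts on banned_topic_b?", "CN", "group"))    # True
print(is_blocked("thoughts on banned_topic_b?", "CN", "private"))  # False
print(is_blocked("thoughts on banned_topic_b?", "US", "group"))    # False
```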

The censorship is also designed to be invisible. Messages are blocked without any user notification, and China has intermittently blocked WhatsApp and other foreign social networks. As a result, Chinese users are steered toward national social networks, which are more compliant with government pressure.

China’s policies play into a larger global trend: the nationalization of the internet. China, Russia, the European Union, and the United States have all adopted different approaches to censorship, user privacy, and surveillance. Although there are social networks such as WeChat or Russia’s VKontakte that are popular in primarily one country, nationalizing the internet challenges users of multinational services such as Facebook and YouTube. These different approaches, which impact everything from data safe harbor laws to legal consequences for posting inflammatory material, have implications for businesses working in multiple countries, as well.

For instance, Twitter is legally obligated to hide Nazi and neo-fascist imagery and some tweets in Germany and France—but not elsewhere. YouTube was officially banned in Turkey for two years because of videos a Turkish court deemed “insulting to the memory of Mustafa Kemal Atatürk,” father of modern Turkey. In Russia, Google must keep Russian users’ personal data on servers located inside Russia to comply with government policy.

While China is a pioneer in the field of instant censorship, tech companies in the United States are matching China’s progress, which could potentially have a chilling effect on democracy. In 2016, Apple applied for a patent on technology that censors audio streams in real time—automating the previously manual process of censoring curse words in streaming audio.

In March, after U.S. President Donald Trump told Fox News, “I think maybe I wouldn’t be [president] if it wasn’t for Twitter,” Twitter founder Evan “Ev” Williams did something highly unusual for the creator of a massive social network.

He apologized.

Speaking with David Streitfeld of The New York Times, Williams said, “It’s a very bad thing, Twitter’s role in that. If it’s true that he wouldn’t be president if it weren’t for Twitter, then yeah, I’m sorry.”

Entrepreneurs tend to be very proud of their innovations. Williams, however, offers a far more ambivalent response to his creation’s success. Much of the 2016 presidential election’s rancor was fueled by Twitter, and the instant gratification of Twitter attracts trolls, bullies, and bigots just as easily as it attracts politicians, celebrities, comedians, and sports fans.

Services such as Twitter, Facebook, YouTube, and Instagram are designed through a mix of look and feel, algorithmic wizardry, and psychological techniques to hang on to users for as long as possible—which helps the services sell more advertisements and make more money. Toxic political discourse and online harassment are unintended side effects of the economic-driven urge to keep users engaged no matter what.

Keeping users’ eyeballs on their screens requires endless hours of multivariate testing, user research, and algorithm refinement. For instance, Casey Newton of tech publication The Verge notes that Google Brain, Google’s AI division, plays a key part in generating YouTube’s video recommendations.

“Before, if I watch this video from a comedian, our recommendations were pretty good at saying, here’s another one just like it,” Jim McFadden, the technical lead for YouTube recommendations, told Newton. “But the Google Brain model figures out other comedians who are similar but not exactly the same—even more adjacent relationships. It’s able to see patterns that are less obvious.”
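
A minimal sketch of the underlying idea – recommending by nearest neighbors in a learned embedding space rather than by exact matches – might look like the following. The vectors and channel names are toy values, not anything from Google Brain’s model:

```python
# A toy nearest-neighbor recommender: each channel gets a vector, and
# "similar but not exactly the same" channels are the ones closest in
# that space. Vectors and names are invented for illustration.
import numpy as np

embeddings = {
    "comedian_a": np.array([0.9, 0.1, 0.0]),
    "comedian_b": np.array([0.8, 0.3, 0.1]),  # adjacent, not identical
    "late_night": np.array([0.6, 0.5, 0.2]),  # a less obvious, adjacent relationship
    "cooking":    np.array([0.0, 0.2, 0.9]),  # far away in "taste" space
}

def recommend(watched: str, k: int = 2) -> list:
    """Rank all other channels by cosine similarity to the watched one."""
    w = embeddings[watched]
    scores = {
        name: float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))
        for name, v in embeddings.items()
        if name != watched
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("comedian_a"))  # ['comedian_b', 'late_night']
```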

A never-ending flow of content that is interesting without being repetitive is harder to resist. With users glued to online services, addiction and other behavioral problems occur to an unhealthy degree. According to a 2016 poll by nonprofit research company Common Sense Media, 50% of American teenagers believe they are addicted to their smartphones.

This pattern is extending into the workplace. Seventy-five percent of companies told research company Harris Poll in 2016 that two or more hours a day are lost in productivity because employees are distracted. The number one reason? Cellphones and texting, according to 55% of those companies surveyed. Another 41% pointed to the internet.

Tristan Harris, a former design ethicist at Google, argues that many product designers for online services try to exploit psychological vulnerabilities in a bid to keep users engaged for longer periods. Harris refers to an iPhone as “a slot machine in my pocket” and argues that user interface (UI) and user experience (UX) designers need to adopt something akin to a Hippocratic Oath to stop exploiting users’ psychological vulnerabilities.

In fact, there is an entire school of study devoted to “dark UX”—small design tweaks to increase profits. These can be as innocuous as a “Buy Now” button in a visually pleasing color or as controversial as when Facebook tweaked its algorithm in 2012 to show a randomly selected group of almost 700,000 users (who had not given their permission) newsfeeds that skewed more positive to some users and more negative to others to gauge the impact on their respective emotional states, according to an article in Wired.

As computers, smartphones, and televisions come ever closer to convergence, these issues matter increasingly to businesses. Some of the universal side effects of addiction are lost productivity at work and poor health. Businesses should offer training and help for employees who can’t stop checking their smartphones.

Mindfulness-centered mobile apps such as Headspace, Calm, and Forest offer one way to break the habit. Users can also choose to break internet addiction by going for a walk, turning their computers off, or using tools like StayFocusd or Freedom to block addictive websites or apps.

Most importantly, companies in the business of creating tech products need to design software and hardware that discourages addictive behavior. This means avoiding bad designs that emphasize engagement metrics over human health. A world of advertising preroll showing up on smart refrigerator touchscreens at 2 a.m. benefits no one.

According to a 2014 study in Cyberpsychology, Behavior and Social Networking, approximately 6% of the world’s population suffers from internet addiction to one degree or another. As more users in emerging economies gain access to cheap data, smartphones, and laptops, that percentage will only increase. For businesses, getting a head start on stopping internet addiction will make employees happier and more productive. D!


About the Authors

Maurizio Cattaneo is Director, Delivery Execution, Energy, and Natural Resources, at SAP.

David Delaney is Global Vice President and Chief Medical Officer, SAP Health.

Volker Hildebrand is Global Vice President for SAP Hybris solutions.

Neal Ungerleider is a Los Angeles-based technology journalist and consultant.


Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.

Death Of An IT Salesman

Jesper Schleimann

As software shifts from supporting the strategy to becoming the strategy at most companies, the relationship – and even the sales process – between vendor and customer in the IT industry is undergoing some remarkable changes. The traditional IT salesman is an endangered species.

I recently had the pleasure of participating in a workshop with one of Scandinavia’s largest companies to create new business models in the company’s operations business area. As an IT vendor, we worked with the customer in an open process using the design thinking methodology—a creative process in which we jointly visualized, defined, and solidified how new flows of data can change business processes and their business models.

By working with “personas” relevant to their business, we could better understand how technology can help different roles in the involved departments deliver their contributions faster and more efficiently. The scope was completely open. We put our knowledge and experience with technological opportunities in parallel with the company’s own knowledge of the market, processes, and business.

The results may trigger a sale of software from our side at some point, but we do not know exactly which solution—or even if it will happen. What we did do was innovate together and better understand our customer’s future and viable routes to success. Such is the reality of strategic digitization work here on the verge of 2018.

Solution selling is not enough

In my view, the transgressive nature of technology is radically changing the way businesses and the sales process work. The IT industry—at least parts of it—must focus on completely different types of collaboration with the customer.

Historically, the sales process has already seen major changes. In the past, you’d find a product-fixated “used-car-sales” approach, which identified the characteristics of the box or solution and left it to the customer to find the hole in the cheese. Since then, a generation of IT key account managers learned “solution selling,” with a sharp focus on finding and defining a “pain point” at the customer and then positioning the solution against it. But today, even that approach falls short.

Endangered species

The challenge is that software solutions now support the formation of new, as-yet-unknown business models. They traverse processes and do not respect silo borders within organizations. Consequently, businesses struggle to define a clear operational path. Top management faces a much broader search for innovation potential. Creating a compelling vision itself requires a continuous and comprehensive study of what digitization can do for the value chain and for the company’s ecosystem.

Vendors abandon their customers if they are too busy selling different tools and platforms to enter into a committed partnership around creating the new business model. Therefore, the traditional IT salesperson, preoccupied with their own goals, is becoming an endangered species. The customer-driven process requires even key account managers to dig deep and endeavor to understand the customer’s business. The best in the IT industry will move closer to the role of trusted adviser, mastering the required capabilities and accepting the risks and rewards that follow.

Leaving the comfort zone

This obviously has major consequences for the sales culture in the IT industry. Reward mechanisms and incentive structures need to be reconsidered and shifted toward more behavioral incentives. And the individual IT salesperson is going on a personal journey: the end goal is no longer to close an order, but to create visions and deliver value in partnership with the customer – and to do so in an ever-changing context, where the future is volatile and unpredictable.

A key account manager is the customer’s traveling companion. Do not expect to be able to reduce complexity, stay in your comfort zone, and remain unaffected by this change. Vendors should think bigger, and as an IT salesperson, you need to show your capacity for transformational thinking. Everyone must be prepared to take the first baby steps, but there will certainly also be some who cannot handle the change. Disruption is not just something you, as a vendor, deliver to a customer. The noble art of being a digital vendor is facing some serious earthquakes.

For more on how tech innovation is disrupting traditional business models, see Why You Should Consider Disrupting Your Own Business.

About Jesper Schleimann

Chief Technology Officer, Nordic & Baltic region

In his role as Nordic CTO, Jesper helps customers unlock their business potential by simplifying their digital transformation. He holds a Cand.polit. from the University of Copenhagen as well as an Executive MBA from Copenhagen Business School.