Can Technology Replace Human Interpreters?

Simon Davies

Over the past few years, the demand for real-time interpretation services has increased considerably. The globalisation of business is a major contributing factor: it has increased the opportunities for international trade and opened new markets for businesses all around the world.

To stay competitive and keep up with this increase in demand for interpreting services, developers have been working on technological solutions that aim to deliver high-quality simultaneous interpretation. But can tech really replace humans when it comes to interpreting?

Advances in interpreting and translation tech

Real-time translation systems include applications that can be installed on smartphones, computers, or other gadgets linked to the Internet. The words of the speaker are transcribed by a computer server, which analyses the content and selects the closest translation from a vast collection of phrase pairs in its database.
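As a rough illustration of that lookup step, here is a minimal sketch in Python. The phrase table, scoring method, and translations are invented for the example and are not taken from any vendor’s actual system; real services work with vastly larger databases and far more sophisticated matching.

```python
import difflib

# Toy phrase table: source phrases paired with target translations (illustrative data only).
PHRASE_PAIRS = {
    "where is the station": "où est la gare",
    "how much does this cost": "combien ça coûte",
    "i would like a coffee": "je voudrais un café",
}

def translate(utterance: str) -> str:
    """Return the translation paired with the stored phrase closest to the utterance,
    mimicking the closest-match lookup described above."""
    best = max(
        PHRASE_PAIRS,
        key=lambda phrase: difflib.SequenceMatcher(None, utterance.lower(), phrase).ratio(),
    )
    return PHRASE_PAIRS[best]

print(translate("Where's the station?"))  # -> "où est la gare"
```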

There are a few machine interpreting solutions on the market already, with Israeli startup Lexifone having launched a telephone-based service in 2013. The Nara Institute of Science and Technology’s translation app, VoiceTra, currently covers 27 languages for text. For speech, it is said to be “good enough to make understandable 90 percent of what you want to say.” Researchers from the Institute are now understood to be working on a lag-free interpreting system for the 2020 Tokyo Olympics, which will reportedly translate the games’ Japanese commentary in real time.

Despite their growing popularity, apps and services such as these have drawn criticism for their inability to accurately infer the meaning of what is being said. Humans often use context to determine the meaning of words, and consider how individual words interact with each other. These combinations change constantly, owing to evolving human creativity.

The latest technologies offer the most viable solution yet. UK-based startup Mymanu aims to make conversations in multiple languages easier with the Clik, a pair of “smart” earbuds.

Clik earbuds contain a microphone and a microprocessor that do the “brain” work, and the device promises to translate 37 different languages in real time. The earbud analyses an entire sentence in order to “understand” the context of what is being said and issue an appropriate interpretation. It’s understood the maximum waiting time for translation is 5-10 seconds.

It’s similar to the tech used in text-based translations, which is now capable of “learning” how best to issue translations relevant to context: In September 2016, Google Translate switched from Phrase-Based Machine Translation (PBMT) to Google Neural Machine Translation (GNMT), a new AI technology that saves information about the meaning of phrases, rather than just direct phrase translations.
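For readers who want to see the neural approach in action, a minimal sketch using an open-source model is shown below. It relies on the Hugging Face transformers library and a publicly available Helsinki-NLP model rather than Google’s GNMT, so it illustrates the general technique, not Google’s actual system.

```python
# Requires: pip install transformers sentencepiece torch
from transformers import pipeline

# A publicly available neural translation model (not Google's GNMT); it encodes
# whole sentences rather than looking up isolated phrase pairs.
translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")

result = translator("The spirit is willing, but the flesh is weak.")
print(result[0]["translation_text"])
```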

“Poetry is what’s lost in translation”—Robert Frost

While online translation software has come a long way over the years, it is still flawed. Just recently, Google corrected a few slightly comical errors:

  • The term Russian Federation was being translated into Mordor—yes, that place from Lord of the Rings
  • Russians was regularly translated as occupiers
  • Sergey Lavrov was translated into sad little horse

It’s not the technology’s fault that this happens; owing to the complexity of words and their meanings in different contexts, figurative and metaphorical translations go astray from time to time. But no matter how advanced the computer algorithm, it cannot replicate instinct.

As professional translation service providers London Translations explain, “A good interpreter has to have an in-depth, up-to-date understanding of a language’s quirks, nuances, and colloquialisms, as well as the way its speakers prefer to conduct business.” For this reason, interpreters only ever market their services for translation into their native language. Translation, after all, is about more than just the language you use; it’s about culture too.

It’s also important to remember that not all communication is verbal. By picking up on conscious and subconscious gestures and expressions, a talented interpreting professional brings a level of mutual understanding to a multilingual dialogue to which tech cannot (yet) compare.

It’s also important to consider that real-time machine interpreting solutions are not only required to render speech from one language to another, but also to provide a verbal output. Though speech synthesis is far better now than it was a few years ago, it often falls short of acceptable standards in pronunciation, tone of voice and, ultimately, tact. In the business world, this can be make-or-break.

Still, tech giants are trying to address these concerns, with Microsoft recently updating Translator to allow group chat conversations, and boasting an improved comprehension of colloquialisms. Google has recently upped its game too, switching to what it calls Neural Machine Translation—an artificial intelligence that memorises information about the meaning of phrases instead of mere phrase translations.

But there are between 6,000 and 7,000 languages in the world today, of which about 1,000 have some economic significance. This means that for translation technology to take over from humans, it would need to support all of these languages. Bearing in mind that Google Translate supports only about 80 languages, there is still a very long road ahead.

While translation technology no longer fails as often as it used to and may eventually replace translators for the more mundane (or less nuanced) tasks where “good enough” is good enough, the tech still seems to be lacking the human element.

Can technology replace human interpreters?

Nevertheless, with all these technological advances, interpreters’ jobs will inevitably evolve, just as they have already. The Nuremberg Trials are generally considered to have been the event that changed interpretation forever. Before the trials, any kind of interpretation was done consecutively: talk first, and then wait for the interpreter to translate. In 1945, for the first time, interpretations were performed simultaneously, using a system of microphones and headsets to transmit the cacophony of languages.

But the responsibility of interpreting upsetting content proved difficult for some. Nuremberg interpreter Siegfried Ramler recalls a courtroom colleague encountering a word “that she could not bring herself to pronounce, because it was so vulgar.” Not wanting to say it in open court, “she stopped, she just wouldn’t do it… I took the microphone and used that word, in fact I made it worse.”

Since then, audio-visual communications infrastructure and conference technology have allowed for the increased anonymity of interpreting agents and again revolutionized the proficiency of interpretation services in such environments as hospitals, the courtroom, and international political arenas such as the European Parliament. Now, more advanced tech can help facilitate an improved human interpretation service.

In 2008, Livescribe launched its first “smart pen,” which featured an infrared camera just below the writing tip to record the movements of the pen, and a built-in microphone to pick up ambient sound. Handwritten notes are then synchronised with the sound recordings using a digital time signature for playback on demand. The Smartpen offers a safety net of sound recording in consecutive interpretation settings where accuracy is key, like healthcare and the justice system.
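The synchronisation principle is straightforward to sketch: every pen stroke is stamped with the time since recording began, so tapping a note can seek the audio to the matching offset. The Python fragment below is purely illustrative; the data layout and function names are assumptions, not Livescribe’s actual format or API.

```python
import bisect

# Hypothetical stroke log: (seconds since the recording started, note label).
strokes = [(3.2, "patient name"), (41.7, "dosage"), (95.0, "follow-up date")]
stroke_times = [t for t, _ in strokes]

def audio_offset_for(note: str) -> float:
    """Return the recording offset at which the given note was written."""
    for t, label in strokes:
        if label == note:
            return t
    raise KeyError(note)

def note_at(offset_seconds: float) -> str:
    """Return the most recent note written at or before a playback position."""
    i = bisect.bisect_right(stroke_times, offset_seconds) - 1
    return strokes[max(i, 0)][1]

print(audio_offset_for("dosage"))  # seek the audio to 41.7 s
print(note_at(100.0))              # -> "follow-up date"
```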

From a didactic standpoint, the decisions and ethical dilemmas interpreters face on a daily basis are countless and the potential for disagreement regarding those decisions is great. Technology Mediated Dispute Resolution (TMDR) processes can be particularly useful when misunderstandings and conflicts arise. It’s also thanks to tech that all work is documented and thus available for follow-up and review.

So technology-assisted interpreting is more and more welcome. In its simplest application, smartphones, tablets, and online dictionaries are being put to good use, described by some as an “infallible information butler” if personal knowledge comes up short.

Leading providers adopt technological solutions when the time is right in order to gain a competitive advantage. Of course, machine interpretation is a fledgling technology, and there’s no saying what the next wave of innovation will bring. For now, though, it’s probably safe to say human interpreters are irreplaceable.

For more insight on developing technology and its practical applications, see Machine Learning: The Real Business Intelligence.


About Simon Davies

Simon Davies is a London-based freelance writer with an interest in startup culture, issues, and solutions. He explores new markets and disruptive technologies and communicates those recent developments to a wide, public audience. Simon is also a contributor at socialbarrel.com, socialnomics.net, and tech.co. Follow Simon @simontheodavies on Twitter.

Connected Cars, Autonomous Vehicles, And The IoT

Tom Raftery

Part 2 of the “Future of Transportation and the Internet of Things” series

In my last blog, I talked about the simplicity of the electric engine compared to the internal combustion engine and how this changes everything. From climate to the structure of the auto industry to the way we store, manage, and distribute energy, electric cars are having tremendous impact.

But what I left out of that discussion was the Internet of Things.

Predictive

The fact is, most electric cars are connected cars – connected through the Internet of Things. This means that sensors in the car constantly communicate with mission control (the manufacturer), sending data on the status of components in real time.

By analyzing this data, especially in context of historical data, mission control can predict component failure before it happens. For electric vehicles – with engines that already need far less repair than traditional internal combustion engines – this only increases reliability.
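To make the idea concrete, here is a minimal anomaly-detection sketch in Python. The sensor channels, numbers, and model choice (scikit-learn’s IsolationForest on synthetic data) are assumptions for illustration only, not any manufacturer’s actual pipeline.

```python
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Historical telemetry: e.g. motor temperature (°C) and vibration (mm/s).
# Synthetic stand-ins for real fleet data.
history = rng.normal(loc=[60.0, 2.0], scale=[3.0, 0.3], size=(5000, 2))

# Learn what "normal" looks like from the historical readings.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Live readings streamed from a connected car; the last one is deliberately abnormal.
live = np.array([[61.2, 2.1], [59.4, 1.8], [82.0, 5.5]])
flags = model.predict(live)  # 1 = normal, -1 = anomaly

for reading, flag in zip(live, flags):
    if flag == -1:
        print(f"Possible component issue, schedule service: {reading}")
```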

But what’s more, IoT-connected cars also increase convenience. For example, after realizing component failure is imminent, your car could also trigger a work order at the dealership to resolve the issue – and ensure the needed replacement part is in stock when you roll in. And if the car is autonomous, it could drive itself to be repaired while you are at work and return ready to drive you home once the repair is completed. Speaking of autonomous….

Autonomous and safe

Connectedness is also what makes autonomous vehicles possible. And while some people may distrust driverless cars, the data shows that they’re safer than the self-driven sort – at least according to a report from the U.S. National Highway Traffic Safety Administration (NHTSA).

Back in May 2016, a Tesla Model S sedan on autopilot collided with a semi-truck in Florida, killing the driver (or passenger in this case?), 40-year-old Joshua Brown. The car, apparently, crashed into the truck, passed under the trailer, and kept driving for some distance – only coming to a stop after crashing through two fences and into a pole.

As a result of this incident, NHTSA conducted an investigation resulting in a report that largely exonerated Tesla. In fact, the report says that after the introduction of autosteer – a component of the autopilot system – Tesla’s crash rate dropped by 40%.

Self-learning

The accident in question happened when the semi-truck took a left-hand turn into oncoming traffic. The reason the Tesla did not detect such a large object in its path is that it could not distinguish the white color of the trailer from the bright, white Florida sky in the background.

Reportedly, Tesla has since analyzed the crash data from this accident, identified the problem, and made fixes to the operating system on which its fleet operates. Perhaps it’s premature to declare the problem solved – but the idea at play here is an interesting one when considering the potential for connected cars and the IoT.

What this scenario shows is a learning platform in action. Because all of its cars are connected on a single platform, Tesla has access to a tremendous amount of driver data that it can analyze to continuously improve product safety. I don’t know exactly how the analysis proceeded in this particular case, but one can certainly envision the use of machine learning technology to continuously analyze patterns and introduce safety improvements on the fly, making the self-learning platform a reality.

Disruptive

A future in which autonomous vehicles are not only viable but safer than self-driven cars will result in disruptions beyond those I’ve indicated for electric engines.

Take the insurance industry, for example. With fewer accidents comes lower risk – leading to lower insurance premiums. And in a future where most cars on the road are autonomous – connected and controlled via IoT – the insurable entity will likely shift from the driver (who is now a passenger) to the operator of the network (presumably the manufacturer). Certainly, if you decide you wish to drive your car yourself, your insurance will be significantly more expensive than the insurance for an autonomous vehicle.

Of course, if autonomous cars can get where they’re going without a driver, why even bother owning a car? Why not just call up the ride when you need it, Uber style?

One result would be optimal asset utilization, where cars that are far less likely to break down can be used on an almost 24×7 basis by spreading usage across individuals. This would mean we’d need far fewer cars on the road, which would alleviate congestion. It would also hit the auto industry with dramatically lower sales volume.

And with fewer cars on the road – cars that are in use almost all the time – we’d have less use for parking. This would have tremendous impact on an industry that generates approximately $20 billion annually.

Beyond industry disruption, less need for parking would open up tremendous urban space in the form of unused lots and garages. Maybe this would mean more populous cities with room to build for more people to live more comfortably without traffic congestion. Or how about using some of the space for indoor vertical farming using hydroponics technology and LED lights to grow more food and feed more people? Of course, this is already happening. But that’s a blog for another time.

To meet the market’s expectations for increasingly fast, responsive, and personalized service, speed of business will be everything. Find out how innovative processes can enable your business to remain successful in this evolving landscape. Learn more and download the IDC paper “Realizing IoT’s Value – Connecting Things to People and Processes.”


About Tom Raftery

Tom Raftery is VP and Global Internet of Things Evangelist for SAP. Previously Tom worked as an independent analyst focussing on the Internet of Things, Energy and CleanTech. Tom has a very strong background in social media, is the former co-founder of a software firm and is co-founder and director of hyper energy-efficient data center Cork Internet eXchange. More recently, Tom worked as an Industry Analyst for RedMonk, leading their GreenMonk practice for 7 years.

Connected Environments Will Not Work Without Accurate Asset Master Data

Pamela Dunn

Manufacturers are beginning to use machine intelligence, smart sensors, and the Internet of Things (IoT) to create connected environments. There is broad consensus that transitioning to this type of advanced digital infrastructure will help improve visibility into process functions and allow algorithms and processing power to play bigger roles in optimizing the real-time health of critical assets.

“We are at the beginning of this smart machine journey,” says Dean Fitt, SAP solutions manager for enterprise asset management and plant maintenance. “People want to move from reactive maintenance to predictive maintenance. Sensors and other maintenance technologies have been around awhile, but they are being put together in new ways to transform how we maintain these environments.”

Some companies are tackling these challenges by using software, sensors, drives, and controllers to automate existing assets. This approach allows them to extend the useful life of 50-year-old hydraulic presses and hundred-year-old steam engines, for example. It also preserves more funds for situations where buying new assets is the best or only option for adding needed capabilities.

Master data management is essential for real process improvement

Being able to predict when asset maintenance is required is one of the biggest advantages offered by connected environments and IoT. But predictive analytics require both real-time data and detailed records of each facility’s as-built assets.

Ideally, this information, which includes a number of data types, would be defined as master data objects to ensure consistency across enterprise systems and processes. But capturing and standardizing data from disparate systems, digital formats, and hardcopy documents is often a low priority for project teams when they are focused on bringing new assets online.
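What such a master data object might look like can be sketched in a few lines; the fields below are illustrative assumptions, not SAP’s or any other vendor’s actual schema. Keeping every enterprise system pointed at one such record, rather than at competing copies, is what the governance and workflows discussed below are meant to guarantee.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AssetMasterRecord:
    """Illustrative asset master data object; fields are assumptions, not a vendor schema."""
    asset_id: str
    description: str
    location: str
    manufacturer: str
    model_number: str
    install_date: str                       # ISO 8601, e.g. "1974-05-01"
    maintenance_plan_id: Optional[str] = None
    sensor_ids: list[str] = field(default_factory=list)

press = AssetMasterRecord(
    asset_id="EQ-000123",
    description="Hydraulic press retrofitted with vibration and temperature sensors",
    location="Plant 2 / Line B",
    manufacturer="ExampleCo",
    model_number="HP-900",
    install_date="1974-05-01",
    sensor_ids=["VIB-01", "TEMP-07"],
)
print(press.asset_id, press.sensor_ids)
```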

“The master data is crucial,” says Fitt. “It is the foundation for everything. If you do not have a good foundation, you are building on quicksand.”

That is why organizations should treat master data management as a core function whenever they adopt, maintain, or automate any new or existing assets. Governance, controls, and workflows are essential for using asset data to minimize downtime, enable real-time decision-making, and increase process and worker productivity.

“Technology alone will not ensure accurate data,” says Peter Aynsley-Hartwell, chief technology officer for Utopia Global, Inc., a global data solutions company that focuses on information management. “A lot of people have information they do not trust. As soon as that happens, they begin making incorrect or poor decisions or no decisions at all. And they lose the opportunity to achieve a huge benefit from the information they have.”

Connected environments require a consistent and proactive strategy

As technology continues to evolve, manufacturing processes are likely to become more reliant on machine learning and artificial intelligence. Some manufacturers, distributors, and service companies will probably use processing, logic, and networking to continuously monitor and improve the quality and reliability of their assets.

“We may see some of these concepts make their way into our day-to-day manufacturing operations,” says Aynsley-Hartwell. “Perhaps when we have self-driving cars, they will diagnose and drive themselves to the service provider on their own initiative.”

A simple self-driving system is already in service in Australia, Aynsley-Hartwell notes. Rio Tinto, a British mining company, uses a fleet of 73 trucks of 416 tons each to haul ore along a fixed route. The vehicles are driverless and use GPS units, radars, and sensors to work 24 hours a day while saving the company 15% on overhead costs.

These technologies are evolving quickly, and numerous companies are working on making their assets more autonomous and “smart.” But none of these optimistic visions of the future will be realized without an effective strategy for acquiring and managing vast amounts of data.

Want to learn more? Listen to the SAPRadio show, “The Next Big Thing in Plant Operations: Intelligent Machines and Networks,” and check @SAPPartnerBuild on Twitter.


More Than Noise: Digital Trends That Are Bigger Than You Think

By Maurizio Cattaneo, David Delaney, Volker Hildebrand, and Neal Ungerleider

In the tech world in 2017, several trends emerged as signals amid the noise, signifying much larger changes to come.

As we noted in last year’s More Than Noise list, things are changing—and the changes are occurring in ways that don’t necessarily fit into the prevailing narrative.

While many of 2017’s signals have a dark tint to them, perhaps reflecting the times we live in, we have sought out some rays of light to illuminate the way forward. The following signals differ considerably, but understanding them can help guide businesses in the right direction for 2018 and beyond.

When a team of psychologists, linguists, and software engineers created Woebot, an AI chatbot that helps people learn cognitive behavioral therapy techniques for managing mental health issues like anxiety and depression, they did something unusual, at least when it comes to chatbots: they submitted it for peer review.

Stanford University researchers recruited a sample group of 70 college-age participants on social media to take part in a randomized control study of Woebot. The researchers found that their creation was useful for improving anxiety and depression symptoms. A study of the user interaction with the bot was submitted for peer review and published in the Journal of Medical Internet Research Mental Health in June 2017.

While Woebot may not revolutionize the field of psychology, it could change the way we view AI development. Well-known figures such as Elon Musk and Bill Gates have expressed concerns that artificial intelligence is essentially ungovernable. Peer review, such as with the Stanford study, is one way to approach this challenge and figure out how to properly evaluate and find a place for these software programs.

The healthcare community could be onto something. We’ve already seen instances where AI chatbots have spun out of control, such as when internet trolls trained Microsoft’s Tay to become a hate-spewing misanthrope. Bots are only as good as their design; making sure they stay on message and don’t act in unexpected ways is crucial.

This is especially true in healthcare. When chatbots are offering therapeutic services, they must be properly designed, vetted, and tested to maintain patient safety.

It may be prudent to apply the same level of caution to a business setting. By treating chatbots as if they’re akin to medicine or drugs, we have a model for thorough vetting that, while not perfect, is generally effective and time tested.

It may seem like overkill to think of chatbots that manage pizza orders or help resolve parking tickets as potential health threats. But it’s already clear that AI can have unintended side effects that could extend far beyond Tay’s loathsome behavior.

For example, in July, Facebook shut down an experiment where it challenged two AIs to negotiate with each other over a trade. When the experiment began, the two chatbots quickly went rogue, developing linguistic shortcuts to reduce negotiating time and leaving their creators unable to understand what they were saying.

The implications are chilling. Do we want AIs interacting in a secret language because designers didn’t fully understand what they were designing?

In this context, the healthcare community’s conservative approach doesn’t seem so farfetched. Woebot could ultimately become an example of the kind of oversight that’s needed for all AIs.

Meanwhile, it’s clear that chatbots have great potential in healthcare—not just for treating mental health issues but for helping patients understand symptoms, build treatment regimens, and more. They could also help unclog barriers to healthcare, which is plagued worldwide by high prices, long wait times, and other challenges. While they are not a substitute for actual humans, chatbots can be used by anyone with a computer or smartphone, 24 hours a day, seven days a week, regardless of financial status.

Finding the right governance for AI development won’t happen overnight. But peer review, extensive internal quality analysis, and other processes will go a long way to ensuring bots function as expected. Otherwise, companies and their customers could pay a big price.

Elon Musk is an expert at dominating the news cycle with his sci-fi premonitions about space travel and high-speed hyperloops. However, he captured media attention in Australia in April 2017 for something much more down to earth: how to deal with blackouts and power outages.

In 2016, a massive blackout hit the state of South Australia following a storm. Although power was restored quickly in Adelaide, the capital, people in the wide stretches of arid desert that surround it spent days waiting for the power to return. That hit South Australia’s wine and livestock industries especially hard.

South Australia’s electrical grid currently gets more than half of its energy from wind and solar, with coal and gas plants acting as backups for when the sun hides or the wind doesn’t blow, according to ABC News Australia. But this network is vulnerable to sudden loss of generation—which is exactly what happened in the storm that caused the 2016 blackout, when tornadoes ripped through some key transmission lines. Getting the system back on stable footing has been an issue ever since.

Displaying his usual talent for showmanship, Musk stepped in and promised to build the world’s largest battery to store backup energy for the network—and he pledged to complete it within 100 days of signing the contract or the battery would be free. Pen met paper with South Australia and French utility Neoen in September. As of press time in November, construction was underway.

For South Australia, the Tesla deal offers an easy and secure way to store renewable energy. Once completed, Tesla’s 129 MWh battery will be the most powerful battery system in the world, exceeding the next largest by 60%, according to Gizmodo. The battery, which is stationed at a wind farm, will cover temporary drops in wind power and kick in to help conventional gas and coal plants balance generation with demand across the network. South Australian citizens and politicians largely support the project, which Tesla claims will be able to power 30,000 homes.

Until Musk made his bold promise, batteries did not figure much in renewable energy networks, mostly because they just aren’t that good. They have limited charges, are difficult to build, and are difficult to manage. Utilities also worry about relying on the same lithium-ion battery technology as cellphone makers like Samsung, whose Galaxy Note 7 had to be recalled in 2016 after some defective batteries burst into flames, according to CNET.

However, when made right, the batteries are safe. It’s just that they’ve traditionally been too expensive for large-scale uses such as renewable power storage. But battery innovations such as Tesla’s could radically change how we power the economy. According to a study that appeared this year in Nature, the continued drop in the cost of battery storage has made renewable energy price-competitive with traditional fossil fuels.

This is a massive shift. Or, as David Roberts of news site Vox puts it, “Batteries are soon going to disrupt power markets at all scales.” Furthermore, if the cost of batteries continues to drop, supply chains could experience radical energy cost savings. This could disrupt energy utilities, manufacturing, transportation, and construction, to name just a few, and create many opportunities while changing established business models. (For more on how renewable energy will affect business, read the feature “Tick Tock” in this issue.)

Battery research and development has become big business. Thanks to electric cars and powerful smartphones, there has been incredible pressure to make more powerful batteries that last longer between charges.

The proof of this is in the R&D funding pudding. A Brookings Institution report notes that both the Chinese and U.S. governments offer generous subsidies for lithium-ion battery advancement. Automakers such as Daimler and BMW have established divisions marketing residential and commercial energy storage products. Boeing, Airbus, Rolls-Royce, and General Electric are all experimenting with various electric propulsion systems for aircraft—which means that hybrid airplanes are also a possibility.

Meanwhile, governments around the world are accelerating battery research investment by banning internal combustion vehicles. Britain, France, India, and Norway are seeking to go all electric as early as 2025 and by 2040 at the latest.

In the meantime, expect huge investment and new battery innovation from interested parties across industries that all share a stake in the outcome. This past September, for example, Volkswagen announced a €50 billion research investment in batteries to help bring 300 electric vehicle models to market by 2030.

At first, it sounds like a narrative device from a science fiction novel or a particularly bad urban legend.

Powerful cameras in several Chinese cities capture photographs of jaywalkers as they cross the street and, several minutes later, display their photograph, name, and home address on a large screen posted at the intersection. Several days later, a summons appears in the offender’s mailbox demanding payment of a fine or fulfillment of community service.

As Orwellian as it seems, this technology is very real for residents of Jinan and several other Chinese cities. According to a Xinhua interview with Li Yong of the Jinan traffic police, “Since the new technology has been adopted, the cases of jaywalking have been reduced from 200 to 20 each day at the major intersection of Jingshi and Shungeng roads.”

The sophisticated cameras and facial recognition systems already used in China—and their near–real-time public shaming—are an example of how machine learning, mobile phone surveillance, and internet activity tracking are being used to censor and control populations. Most worryingly, the prospect of real-time surveillance makes running surveillance states such as the former East Germany and current North Korea much more financially efficient.

According to a 2015 discussion paper by the Institute for the Study of Labor, a German research center, by the 1980s almost 0.5% of the East German population was directly employed by the Stasi, the country’s state security service and secret police—1 for every 166 citizens. An additional 1.1% of the population (1 for every 66 citizens) were working as unofficial informers, which represented a massive economic drain. Automated, real-time, algorithm-driven monitoring could potentially drive the cost of controlling the population down substantially in police states—and elsewhere.

We could see a radical new era of censorship that is much more manipulative than anything that has come before. Previously, dissidents were identified when investigators manually combed through photos, read writings, or listened in on phone calls. Real-time algorithmic monitoring means that acts of perceived defiance can be identified and deleted in the moment and their perpetrators marked for swift judgment before they can make an impression on others.

Businesses need to be aware of the wider trend toward real-time, automated censorship and how it might be used in both commercial and governmental settings. These tools can easily be used in countries with unstable political dynamics and could become a real concern for businesses that operate across borders. Businesses must learn to educate and protect employees when technology can censor and punish in real time.

Indeed, the technologies used for this kind of repression could be easily adapted from those that have already been developed for businesses. For instance, both Facebook and Google use near–real-time facial identification algorithms that automatically identify people in images uploaded by users—which helps the companies build out their social graphs and target users with profitable advertisements. Automated algorithms also flag Facebook posts that potentially violate the company’s terms of service.

China is already using these technologies to control its own people in ways that are largely hidden to outsiders.

According to a report by the University of Toronto’s Citizen Lab, the popular Chinese social network WeChat operates under a policy its authors call “One App, Two Systems.” Users with Chinese phone numbers are subjected to dynamic keyword censorship that changes depending on current events and whether a user is in a private chat or in a group. Depending on the political winds, users are blocked from accessing a range of websites that report critically on China through WeChat’s internal browser. Non-Chinese users, however, are not subject to any of these restrictions.

The censorship is also designed to be invisible. Messages are blocked without any user notification, and China has intermittently blocked WhatsApp and other foreign social networks. As a result, Chinese users are steered toward national social networks, which are more compliant with government pressure.

China’s policies play into a larger global trend: the nationalization of the internet. China, Russia, the European Union, and the United States have all adopted different approaches to censorship, user privacy, and surveillance. Although there are social networks such as WeChat or Russia’s VKontakte that are popular in primarily one country, nationalizing the internet challenges users of multinational services such as Facebook and YouTube. These different approaches, which impact everything from data safe harbor laws to legal consequences for posting inflammatory material, have implications for businesses working in multiple countries, as well.

For instance, Twitter is legally obligated to hide Nazi and neo-fascist imagery and some tweets in Germany and France—but not elsewhere. YouTube was officially banned in Turkey for two years because of videos a Turkish court deemed “insulting to the memory of Mustafa Kemal Atatürk,” father of modern Turkey. In Russia, Google must keep Russian users’ personal data on servers located inside Russia to comply with government policy.

While China is a pioneer in the field of instant censorship, tech companies in the United States are matching China’s progress, which could potentially have a chilling effect on democracy. In 2016, Apple applied for a patent on technology that censors audio streams in real time—automating the previously manual process of censoring curse words in streaming audio.

In March, after U.S. President Donald Trump told Fox News, “I think maybe I wouldn’t be [president] if it wasn’t for Twitter,” Twitter founder Evan “Ev” Williams did something highly unusual for the creator of a massive social network.

He apologized.

Speaking with David Streitfeld of The New York Times, Williams said, “It’s a very bad thing, Twitter’s role in that. If it’s true that he wouldn’t be president if it weren’t for Twitter, then yeah, I’m sorry.”

Entrepreneurs tend to be very proud of their innovations. Williams, however, offers a far more ambivalent response to his creation’s success. Much of the 2016 presidential election’s rancor was fueled by Twitter, and the instant gratification of Twitter attracts trolls, bullies, and bigots just as easily as it attracts politicians, celebrities, comedians, and sports fans.

Services such as Twitter, Facebook, YouTube, and Instagram are designed through a mix of look and feel, algorithmic wizardry, and psychological techniques to hang on to users for as long as possible—which helps the services sell more advertisements and make more money. Toxic political discourse and online harassment are unintended side effects of the economic-driven urge to keep users engaged no matter what.

Keeping users’ eyeballs on their screens requires endless hours of multivariate testing, user research, and algorithm refinement. For instance, Casey Newton of tech publication The Verge notes that Google Brain, Google’s AI division, plays a key part in generating YouTube’s video recommendations.

“Before, if I watch this video from a comedian, our recommendations were pretty good at saying, here’s another one just like it,” Jim McFadden, the technical lead for YouTube recommendations, told Newton. “But the Google Brain model figures out other comedians who are similar but not exactly the same—even more adjacent relationships. It’s able to see patterns that are less obvious.”
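One way to picture those “adjacent relationships” is embedding similarity: each item is represented as a vector, and candidates are ranked by how close they sit to what the user just watched. The sketch below uses made-up vectors and plain cosine similarity; it illustrates the general technique, not YouTube’s or Google Brain’s actual model.

```python
import numpy as np

# Made-up "taste" embeddings for a handful of channels (illustrative only).
embeddings = {
    "stand-up comedian A": np.array([0.9, 0.1, 0.0]),
    "stand-up comedian B": np.array([0.8, 0.2, 0.1]),
    "late-night talk show": np.array([0.6, 0.4, 0.2]),
    "chess lectures": np.array([0.0, 0.1, 0.9]),
}

def recommend(just_watched: str, k: int = 2) -> list[str]:
    """Rank the other items by cosine similarity to the one just watched."""
    v = embeddings[just_watched]

    def cos(u: np.ndarray) -> float:
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    ranked = sorted(
        (name for name in embeddings if name != just_watched),
        key=lambda name: cos(embeddings[name]),
        reverse=True,
    )
    return ranked[:k]

print(recommend("stand-up comedian A"))  # similar, but not identical, content first
```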

A never-ending flow of content that is interesting without being repetitive is harder to resist. With users glued to online services, addiction and other behavioral problems occur to an unhealthy degree. According to a 2016 poll by nonprofit research company Common Sense Media, 50% of American teenagers believe they are addicted to their smartphones.

This pattern is extending into the workplace. Seventy-five percent of companies told research company Harris Poll in 2016 that two or more hours a day are lost in productivity because employees are distracted. The number one reason? Cellphones and texting, according to 55% of those companies surveyed. Another 41% pointed to the internet.

Tristan Harris, a former design ethicist at Google, argues that many product designers for online services try to exploit psychological vulnerabilities in a bid to keep users engaged for longer periods. Harris refers to an iPhone as “a slot machine in my pocket” and argues that user interface (UI) and user experience (UX) designers need to adopt something akin to a Hippocratic Oath to stop exploiting users’ psychological vulnerabilities.

In fact, there is an entire school of study devoted to “dark UX,” the small design tweaks that increase profits. These can be as innocuous as a “Buy Now” button in a visually pleasing color, or as controversial as Facebook’s 2012 experiment, in which the company tweaked its algorithm to show a randomly selected group of almost 700,000 users (who had not given their permission) newsfeeds that skewed more positive for some and more negative for others, to gauge the impact on their respective emotional states, according to an article in Wired.

As computers, smartphones, and televisions come ever closer to convergence, these issues matter increasingly to businesses. Some of the universal side effects of addiction are lost productivity at work and poor health. Businesses should offer training and help for employees who can’t stop checking their smartphones.

Mindfulness-centered mobile apps such as Headspace, Calm, and Forest offer one way to break the habit. Users can also choose to break internet addiction by going for a walk, turning their computers off, or using tools like StayFocusd or Freedom to block addictive websites or apps.

Most importantly, companies in the business of creating tech products need to design software and hardware that discourages addictive behavior. This means avoiding bad designs that emphasize engagement metrics over human health. A world of advertising preroll showing up on smart refrigerator touchscreens at 2 a.m. benefits no one.

According to a 2014 study in Cyberpsychology, Behavior and Social Networking, approximately 6% of the world’s population suffers from internet addiction to one degree or another. As more users in emerging economies gain access to cheap data, smartphones, and laptops, that percentage will only increase. For businesses, getting a head start on stopping internet addiction will make employees happier and more productive.


About the Authors

Maurizio Cattaneo is Director, Delivery Execution, Energy, and Natural Resources, at SAP.

David Delaney is Global Vice President and Chief Medical Officer, SAP Health.

Volker Hildebrand is Global Vice President for SAP Hybris solutions.

Neal Ungerleider is a Los Angeles-based technology journalist and consultant.


Read more thought provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.



Death Of An IT Salesman

Jesper Schleimann

As software shifts from supporting the strategy to being the strategy at most companies, the relationship between vendor and customer in the IT industry, and even the sales process itself, is undergoing some remarkable changes. The traditional IT salesman is an endangered species.

I recently had the pleasure of participating in a workshop with one of Scandinavia’s largest companies to create new business models in the company’s operations business area. As an IT vendor, we worked with the customer in an open process using the design thinking methodology—a creative process in which we jointly visualized, defined, and solidified how new flows of data can change business processes and their business models.

By working with “personas” relevant to their business, we could better understand how technology can help different roles in the involved departments deliver their contributions faster and more efficiently. The scope was completely open. We put our knowledge and experience with technological opportunities in parallel with the company’s own knowledge of the market, processes, and business.

The results may trigger a sale of software from our side at some point, but we do not know exactly which solution, or even whether it will happen. What we did do was innovate together and better understand our customer’s future and viable routes to success. Such is the reality of strategic digitization work here on the verge of 2018.

Solution selling is not enough

In my view, the transgressive nature of technology is radically changing the way businesses and the sales process works. The IT industry—at least parts of it—must focus on completely different types of collaboration with the customer.

Historically, the sales process has already undergone major changes. In the past, you’d find a product-fixated “used-car-sales” approach, which identified the characteristics of the box or solution and left it to the customer to find the hole in the cheese. Since then, a generation of IT key account managers learned “solution selling,” with a sharp focus on finding and defining a “pain point” at the customer and then positioning the solution against it. But today, even that approach falls short.

Endangered species

The challenge is that software solutions now support the formation of new, as-yet-unknown business models. They traverse processes and do not respect silo borders within organizations. Consequently, businesses struggle to define a clear operational path. Top management faces a much broader search for innovation potential. Creating a compelling vision itself requires a continuous and comprehensive study of what digitization can do for the value chain and for the company’s ecosystem.

Vendors abandon their customers if they are too busy selling tools and platforms to enter into a committed partnership to create the new business model. Therefore, the traditional IT salesperson, preoccupied with their own targets, is becoming an endangered species. The customer-driven process requires even key account managers to dig deep and endeavor to understand the customer’s business. The best in the IT industry will move closer to the role of trusted adviser, mastering the required capabilities and accepting the risks and rewards that follow.

Leaving the comfort zone

This obviously has major consequences for the sales culture in the IT industry. Reward mechanisms and incentive structures need to be reconsidered toward a more behavioral incentive. And the individual IT salesperson is going on a personal journey, as the end goal is no longer to close an order, but to create visions and deliver value in partnership with the customer and to do so in an ever-changing context, where the future is volatile and unpredictable.

A key account manager is the customer’s traveling companion. Do not expect to be able to reduce complexity and stay in your comfort zone and not be affected by this change. Vendors should think bigger, and as an IT salesperson, you need to show your ability for transformational thinking. Everyone must be prepared to take the first baby steps, but there will definitely also be some who cannot handle the change. Disruption is not just something you, as a vendor, deliver to a customer. The noble art of being a digital vendor is facing some serious earthquakes.

For more on how tech innovation is disrupting traditional business models, see Why You Should Consider Disrupting Your Own Business.



About Jesper Schleimann

Chief Technology Officer, Nordic & Baltic region. In his role as Nordic CTO, Jesper's mission is to help customers unlock their business potential by simplifying their digital transformation. Jesper has a Cand.polit. from the University of Copenhagen as well as an Executive MBA from Copenhagen Business School.