Can Technology Replace Human Interpreters?

Simon Davies

Over the past few years, the demand for real-time interpretation services has increased considerably. The globalisation of business has been a major contributing factor, as it has increased the opportunities for international trade and opened new markets for businesses all around the world.

In order to be competitive and keep up with this increase in demand for interpreting services, developers have been working on technological solutions to meet the requirements for high-quality simultaneous interpretations, but can tech really replace humans when it comes to interpreting?

Advances in interpreting and translation tech

Real-time translation systems include applications that can be installed on smartphones, computers, or other gadgets linked to the Internet. The words of the speaker are transcribed by a computer server, which analyses the content and selects the closest translation from a vast collection of phrase pairs in its database.
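To make the phrase-pair lookup described above concrete, here is a deliberately tiny sketch in Python. The phrase table, language pair, and greedy longest-match strategy are all invented for illustration; real systems score millions of statistically extracted phrase pairs rather than matching them exactly.

```python
# Hypothetical miniature English -> French phrase-pair database.
PHRASE_TABLE = {
    "good morning": "bonjour",
    "thank you": "merci",
    "see you soon": "à bientôt",
}

def translate(transcript: str) -> str:
    """Greedily match the longest known phrase at each position."""
    words = transcript.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest span first, shrinking until a phrase matches.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i = j
                break
        else:
            out.append(words[i])  # pass unknown words through unchanged
            i += 1
    return " ".join(out)

print(translate("Good morning thank you"))  # bonjour merci
```

Even this toy version shows the approach's weakness: anything not in the database passes through untranslated, and nothing "understands" the sentence as a whole.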

There are a few machine interpreting solutions on the market already, with Israeli startup Lexifone having launched a telephone-based service in 2013. The Nara Institute of Science and Technology’s translation app, VoiceTra, currently covers 27 languages for text. For speech, it is said to be “good enough to make understandable 90 percent of what you want to say.” Researchers from the Institute are now understood to be working on a lag-free interpreting system for the 2020 Tokyo Olympics, which will reportedly transpose the games’ Japanese commentary in real time.

Despite their growing popularity, apps and services such as these have drawn criticism for their inability to accurately infer the meaning of what is being said. Humans often use context to determine the meaning of words, and consider how individual words interact with each other. These combinations are in constant flux owing to evolving human creativity.

The latest technologies offer the most viable solution yet. UK-based startup Mymanu is deploying “smart” earbuds, the Clik, to make multilingual conversations easier.

Clik earbuds contain a microphone and a microprocessor that does the “brain” work, and promise to translate 37 different languages in real time. The earbud analyses an entire sentence in order to “understand” the context of what is being said and issue an appropriate interpretation. It’s understood the maximum waiting time for a translation is 5-10 seconds.

It’s similar to the tech used in text-based translations, which is now capable of “learning” how best to issue translations relevant to context: In September 2016, Google Translate switched from Phrase-Based Machine Translation (PBMT) to Google Neural Machine Translation (GNMT), a new AI technology that saves information about the meaning of phrases, rather than just direct phrase translations.
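One way to picture what “saving information about the meaning of phrases” buys you: instead of exact phrase lookup, a neural system maps each sentence to a vector of numbers, so paraphrases land close together and can share a translation. The sketch below is a heavy simplification with made-up three-dimensional “meaning” vectors; real neural machine translation learns vectors with hundreds of dimensions from millions of sentence pairs, and generates output word by word rather than retrieving it.

```python
import math

# Hypothetical 3-number "meaning" vectors for a few sentences.
MEANINGS = {
    "how are you": (0.9, 0.1, 0.0),
    "how is it going": (0.85, 0.15, 0.05),
    "where is the station": (0.0, 0.2, 0.9),
}
TRANSLATIONS = {
    "how are you": "comment allez-vous",
    "where is the station": "où est la gare",
}

def cosine(a, b):
    """Similarity of two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def translate_by_meaning(sentence: str) -> str:
    """Translate via the known sentence whose meaning vector is closest."""
    vec = MEANINGS[sentence]  # a real system computes this with a learned encoder
    best = max(TRANSLATIONS, key=lambda s: cosine(vec, MEANINGS[s]))
    return TRANSLATIONS[best]

# "how is it going" has no translation of its own, but its meaning
# vector sits near "how are you", so it inherits that translation.
print(translate_by_meaning("how is it going"))  # comment allez-vous
```

The phrase-based approach would have failed outright on the unseen sentence; working in “meaning space” is what lets neural systems generalise to wordings they have never stored.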

“Poetry is what’s lost in translation”—Robert Frost

While online translation software has come a long way over the years, it is still flawed. Just recently, Google corrected a few somewhat comical errors:

  • The term Russian Federation was being translated into Mordor—yes, that place from Lord of the Rings
  • Russians was regularly translated as occupiers
  • Sergey Lavrov was translated into sad little horse

It’s not the technology’s fault that this happens; owing to the complexity of words and their meanings in different contexts, figurative and metaphorical language is only sometimes translated accurately. But no matter how advanced the computer algorithm, it cannot replicate instinct.

As professional translation service providers London Translations explain, “A good interpreter has to have an in-depth, up-to-date understanding of a language’s quirks, nuances, and colloquialisms, as well as the way its speakers prefer to conduct business.” This is why interpreters only ever market their services for translation into their native language. Translation, after all, is about more than just the language you use; it’s about culture too.

It’s also important to remember that not all communication is verbal. By picking up on conscious and subconscious gestures and expressions, a talented interpreting professional brings a level of mutual understanding to a multilingual dialogue to which tech cannot (yet) compare.

It’s also important to consider that real-time machine interpreting solutions are required not only to transpose speech from one language to another, but also to provide verbal output. Though speech synthesis is far better now than it was a few years ago, it often falls short of acceptable standards in pronunciation, tone of voice and, ultimately, tact. In the business world, this can be make-or-break.

Still, tech giants are trying to address these concerns, with Microsoft recently updating Translator to allow group chat conversations, and boasting an improved comprehension of colloquialisms. Google has recently upped its game too, switching to what it calls Neural Machine Translation—an artificial intelligence that memorises information about the meaning of phrases instead of mere phrase translations.

But there are between 6,000 and 7,000 languages in the world today, of which about 1,000 have some economic significance. For translation technology to take over from humans, it would need to support all of these languages. Bearing in mind that Google Translate supports only about 80 languages, there is still a very long road ahead.

While translation technology no longer fails as often as it used to and may eventually replace translators for the more mundane (or less nuanced) tasks where “good enough” is good enough, the tech still seems to be lacking the human element.

Can technology replace human interpreters?

Nevertheless, with all these technological advances, interpreters’ jobs will inevitably evolve, just as they have already: the Nuremberg Trials are generally considered the event that changed interpretation forever. Before the trials, interpretation of any kind was done consecutively: the speaker talked first, then waited for the interpreter to translate. In 1945, for the first time, interpretation was performed simultaneously, using a system of microphones and headsets to transmit the cacophony of languages.

But the responsibility of interpreting upsetting content proved difficult for some. Nuremberg interpreter Siegfried Ramler recalls a courtroom colleague who faltered “when a word came up that she could not bring herself to pronounce, because it was so vulgar.” Not wanting to say it in open court, “she stopped, she just wouldn’t do it… I took the microphone and used that word, in fact I made it worse.”

Since then, audio-visual communications infrastructure and conference technology have allowed for the increased anonymity of interpreting agents and again revolutionised the proficiency of interpretation services in such environments as hospitals, the courtroom, and international political arenas such as the European Parliament. Now, more advanced tech can help facilitate an improved human interpretation service.

In 2008, Livescribe launched its first “smart pen,” which featured an infrared camera just below the writing tip to record the movements of the pen, and a built-in microphone to pick up ambient sound. Handwritten notes are then synchronised with the sound recordings using a digital time signature for playback on demand. The Smartpen offers a safety net of sound recording in consecutive interpretation settings where accuracy is key, like healthcare and the justice system.

From a didactic standpoint, the decisions and ethical dilemmas interpreters face on a daily basis are countless and the potential for disagreement regarding those decisions is great. Technology Mediated Dispute Resolution (TMDR) processes can be particularly useful when misunderstandings and conflicts arise. It’s also thanks to tech that all work is documented and thus available for follow-up and review.

So technology-assisted interpreting is increasingly welcome. In its simplest application, smartphones, tablets, and online dictionaries are being put to good use, described by some as an “infallible information butler” for when personal knowledge comes up short.

Leading providers adopt technological solutions when the time is right in order to gain a competitive advantage. Of course, machine interpretation is a fledgling technology, and there’s no saying what the next wave of innovation will bring. For now, though, it’s probably safe to say human interpreters are irreplaceable.

For more insight on developing technology and its practical applications, see Machine Learning: The Real Business Intelligence.

About Simon Davies

Simon Davies is a London-based freelance writer with an interest in startup culture, issues, and solutions. His work explores new markets and disruptive technologies and communicates those recent developments to a wide public audience. Follow Simon @simontheodavies on Twitter.