Machine Translation technology has been around far longer than most people would guess, and it has seen more ups and downs than most technologies ever do. Although it has never truly been “dead”, it has simply remained irrelevant to the vast majority of us for decades. And one might justifiably wonder why the same seems to be true for the protagonists of an industry that should have more than polite interest in this technology: professional translators.
The first answer to this is usually: “Because the quality is too low”. And who would object to that? But why is it that the digital revolution, the rise of the internet, the triumph of big data and all the other breakthroughs of information technology have so far not been able to fix that issue? Is it just a question of time and will someone discover the Holy Grail in the next couple of months, years or decades? Chances are that we will live to hear the answer. Seriously.
In the meantime, we can look at what has happened between Machine Translation and professional translators since the technology was invented. Strikingly, with every new wave of hype around Machine Translation, experts and non-experts alike claimed that the end of translation as a profession was near. Or – which many translators found even more threatening – that translators would benefit from Machine Translation input to a degree that would boost their productivity tenfold.

As none of those predictions has yet come true, the disenchanted professionals turned to the technologies that proved successful in daily practice, although these were much younger and less complex: Translation Memory systems and their many smart extra functions. And their productivity went through the roof, at least for updates and repetitive content. Re-use of professionally translated content started dominating the scene. The better the content was structured, formatted and managed upfront, the smaller the number of text modules affected by changes. The less text one needed to send out for translation, the cheaper the update.

But thanks to Translation Memory technology, even rather unstructured text was able to benefit from database leverage, as the so-called “fuzzy matching” algorithms also return suggestions for more or less similar sentences. And all of that happened at relatively moderate software costs.
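The fuzzy-matching idea can be sketched in a few lines. The following is a minimal illustration only, using a simple character-based similarity from Python's standard `difflib` module; the function name, the similarity measure and the 75% threshold are assumptions for the sketch, not the actual algorithm of any particular Translation Memory product:

```python
from difflib import SequenceMatcher

def fuzzy_match(segment, memory, threshold=0.75):
    """Return the TM entry whose source sentence is most similar to
    `segment`, if its similarity score reaches `threshold`."""
    best, best_score = None, 0.0
    for source, target in memory.items():
        # ratio() gives a similarity between 0.0 and 1.0
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score >= threshold and score > best_score:
            best, best_score = (source, target), score
    return best, best_score

# A toy English-to-German translation memory
memory = {
    "Press the power button to start the device.":
        "Drücken Sie die Ein/Aus-Taste, um das Gerät zu starten.",
    "Remove the battery before cleaning.":
        "Entfernen Sie die Batterie vor der Reinigung.",
}

# A new sentence that is similar, but not identical, to a stored one
match, score = fuzzy_match("Press the power button to restart the device.", memory)
```

Here the new sentence differs from the stored one only in “restart” versus “start”, so the system can offer the stored translation as a high-scoring fuzzy match for the translator to adapt, rather than leaving the sentence to be translated from scratch.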
Machine Translation, in the meantime, made its way into the Internet with incredible success. Evolving from a funny little widget available on some small providers’ web pages into a commodity feature for practically the entire web, especially the statistical Machine Translation technology received enormous funding and attention from industry and governmental organizations. Clearly enough, overcoming language barriers – virtually the only barriers remaining between individuals and cultures in the Internet era – has been widely recognized as strategically important.
And while users still smile at unintentional humor of the “ladies are not allowed to have babies in the bar” type, translating everyday content like e-mails, forum answers or interesting ads has become commonplace.
And that’s really exciting! If you are one of those gifted in languages who like to complain about the unacceptable quality of MT, just put yourself in the shoes of someone who is monolingual for a moment: Pick a random text sample of four or five sentences in, say, Hungarian (for the Hungarian speakers among you: pick whatever language you don’t understand at all) and read it carefully.
Meditate about it. Take your time and let it fill your senses.
Then paste it into Google Translate and have it translated into your mother tongue.
Read it again.
Compare your experience. See?!
Machine Translation was invented to let you understand at least part of something you otherwise wouldn’t understand at all. Yes, it’s sometimes clumsy. It can be wrong. Or both. But it still boosted your understanding of the Hungarian text from zero to something significantly above zero in less than 30 seconds. That’s the key point.
Now ask a Hungarian translator to do the same exercise and ask him if that was helpful. He might raise an eyebrow.
So who is right, then? Are translators just too nitpicky?
No – they are just in a totally different situation.
A translator does not need help understanding the meaning of a source text; for him, that’s about as useful as someone clumsily echoing in the same language what another person just said.
A translator wants help in quickly finding the definition of uncommon words, phrases or expert vocabulary, in finding the appropriate wording in the target language, and in writing down a correct translation. He wants to be sure that his wording and style are in line with his audience’s expectations, which in most cases means adherence to prescribed terminology, style and reference material. He wants to free his mind and hands from the task of correcting typos, reproducing layout elements or checking whether the translation will fit into a character-limited space.
He wants to re-use and re-combine existing text to form new messages, and he would surely like more automation for highly repetitive content.
And this is why the naïve application of Machine Translation in the context of professional translation very often does not yield the expected benefits: classic Machine Translation trades away things that translators absolutely need for a benefit they are not interested in.
But here’s the good news:
MT can work for translators if it supports the needs they actually have. The language technology behind MT applications can support those needs in many different ways. MT results themselves can be tuned to better suit a translator’s different automation needs (interested in some research on it?). And in the end, both technologies – Translation Memory and Machine Translation – are more than likely to converge into one over time.