In its first ten years of life, Google Translate (the machine translation service launched in 2006) proved only one thing: machines are utterly incapable of translating from one language to another. No surprise there: giving an automatic system the grammatical rules of some languages and the vocabulary needed to translate words between them is not enough to correctly recreate the ambiguities, subtleties, context and nuances that characterize something as complex and creative as human language. For a long time, machines could manage well only tasks governed by precisely defined rules (from mathematical calculation to chess), but went haywire as soon as they faced anything involving creativity.
In recent times, all this has started to change. With the advent of deep learning (a technique that has by now become almost synonymous with artificial intelligence), we have witnessed not only incredible achievements in fields that have very little to do with mathematics (from image recognition to the creation of paintings), but also artificial intelligences that successfully tackle even more complex domains, including language. The pioneer in this field, just three years ago, was Google Translate, which in November 2016 began to exploit deep learning and immediately made an impressive leap in quality.
Explaining in detail how this leap occurred would take too long (a famous New York Times report tells the story very well). In a nutshell, thanks to deep learning, Google Translate stopped rigidly applying the rules and vocabulary of the languages it was trying (without much success) to translate, and began to determine the correct translation of a word or a sentence based also on context. Or rather, on a statistical estimate of a word's correct translation given the other words that appear near it (drawing on the immense database at its disposal).
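The statistical idea can be illustrated with a deliberately toy sketch (the data and the function below are invented for illustration; Google's real system uses deep neural networks, not lookup tables): an ambiguous word's translation is chosen by scoring each candidate against the words that appear around it.

```python
# Toy illustration of context-based translation choice.
# Hypothetical mini "database": how often each English translation of the
# Italian word "penna" (pen / feather) co-occurs with various context words.
CONTEXT_COUNTS = {
    "pen":     {"write": 12, "ink": 9, "paper": 7, "bird": 0},
    "feather": {"write": 1,  "ink": 0, "paper": 0, "bird": 14},
}

def best_translation(candidates, context_words):
    """Return the candidate translation that best fits the surrounding words."""
    def score(candidate):
        counts = CONTEXT_COUNTS[candidate]
        return sum(counts.get(word, 0) for word in context_words)
    return max(candidates, key=score)

print(best_translation(["pen", "feather"], ["write", "paper"]))  # pen
print(best_translation(["pen", "feather"], ["bird"]))            # feather
```

Real systems replace these hand-written counts with statistics learned from billions of translated sentences, but the principle is the same: the surrounding words, not a dictionary rule, decide the output.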
Once the extraordinary progress made by Translate had been established, even a creative and intellectual job like the translator's inevitably ended up on the lists of “jobs at risk of extinction”. And it is precisely from here that Douglas Hofstadter (professor of Cognitive Science and Comparative Literature, but known above all as the author of the famous essay Gödel, Escher, Bach), in a long article published by The Atlantic, develops a criticism fiercely aimed at those who confuse language with a code to be deciphered and believe that machines can really understand written texts and replace human translators.
“In this scenario, human translators would, within a few years, become mere quality supervisors, rather than people producing an entirely new text. While I understand the appeal of trying to get machines to translate well, I am not at all eager to see human translators replaced by inanimate machines. To be honest, I find the idea scary and revolting. In my mind, translation is an incredibly subtle art, drawing on a lifetime of experience and one’s creative imagination.”
However, Hofstadter is willing to make a couple of concessions to Google Translate: “It is accessible for free to anyone on Earth and can convert texts into around 100 languages (…). The practical usefulness of Google Translate and other similar technologies is undeniable, and it is probably even a good thing overall. But there is still something profoundly missing in its approach, which can be summed up in one word: understanding”.
Machines, of course, cannot really understand a text. As Hofstadter perceptively notes, the best they can do is circumvent or evade true understanding. Simply put, their task is to correctly reproduce a text in another language (with all its nuances, ambiguities and idioms) despite being unable to grasp its meaning.
Given these premises, how does the machine fare in the tests a leading intellectual like Hofstadter subjects it to? The answer is simple: badly, very badly. Hofstadter, who in addition to his native English speaks French, German, Russian and Chinese, put Translate to the test with excerpts from novels in each of these languages, showing how it breaks down even when faced with rather simple obstacles. For example, one translation goes grossly wrong because the system is tripped up by the fact that in French (as in Italian) the gender of possessive pronouns agrees with the thing possessed, while in English it agrees with the possessor: if the car is Marco’s, in Italian one says “la sua macchina” (“sua” agreeing with the feminine “macchina”), while the English say “his car” (“his” agreeing with Marco). (Find the example given by Hofstadter below, in the Italian version.)
Screenshots
In other cases, the artificial intelligence translates idioms literally, draining them of any meaning, or stumbles over complex constructions, producing sentences that simply make no sense. Hofstadter can thus claim victory: “We humans know all about couples, houses, personal possessions, pride, rivalry, jealousy, privacy, and many other abstract concepts that lead to such oddities as a married couple having towels embroidered ‘his’ and ‘hers’. Google Translate is not familiar with such situations. It is not familiar with situations, period”.
Tested by Hofstadter with passages taken from German and Chinese novels as well, the machine keeps failing: it cannot render words correctly when they take on meanings other than their most common ones, and it recreates entire sentences in a confusing if not downright incomprehensible manner (especially from Chinese to English). Another of Hofstadter’s examples is reproduced in Italian in the image below: in the second part, especially, you really cannot understand anything.
Screenshots
The conclusion is clear: the statistical (and probabilistic) approach of artificial intelligence, although based on an immense amount of data, cannot compete with a human’s ability to understand the nuances of a language and recreate them as faithfully as possible in another.
But this is no surprise: what should we expect from a machine grappling with something as incredibly complex as human language? The real surprise is that, in many other cases, Google Translate manages to bypass the need to actually understand a text while still returning an accurate translation. In the cases cited by Hofstadter this rarely happened. Personally, however, I use Translate a lot when a sentence in English is not entirely clear to me, even if only to get a rough idea and then intervene myself (in short, supervising the machine’s translation). The results Translate gives me are often excellent.
Suspicious of the sensational failures reported by Hofstadter, I decided to have Google translate the passage from the Atlantic article that I translated myself at the beginning of this piece. You can see the result for yourself in the image below. Of course, there are a couple of bad errors (“new new texts”, or “the idea scares me and makes me rebel”), but overall it is a translation that, in compulsory school, would earn a comfortable pass. Note also how it makes some commendable choices, such as translating “in my mind” as “in my opinion” (whereas I chose a literal “in my mind”).
Screenshots
It could, however, be a lucky coincidence, so I continued my personal test and fed Translate two passages from two very different books: Dostoevsky’s Demons (in Italian) and Digital Gold, the essay in which the journalist Nathaniel Popper recounts the origins of bitcoin (in English). Two books at opposite extremes: a novel of high literature and an essay on a very technical subject. This is no random choice: machine translation struggles when it has to work on the sophisticated, nuanced language of novels, but it fares much better with technical language, which is, as such, simpler, more direct and less ambiguous.
Screenshots
The image above shows Translate at work on the passage that opens Demons. I am not a professional translator, but I would say it did very well: the meaning is clear and there are no particularly glaring errors.
The same also applies to the essay on bitcoin: of course, the choice of verb tenses in the second part leaves something to be desired, as does the translation of the last sentence. But overall the meaning is clear.
Screenshots
The really important point is therefore this: should we be surprised that an artificial intelligence often fails to translate complex narrative works, or surprised that in many other cases it pulls off the incredible feat of translating almost correctly? And will translators really disappear? The most likely answer to this second question is: it depends.
If we are talking about translators who work on scientific texts (such as a school chemistry textbook) or even on instruction manuals, it is very likely that artificial intelligence algorithms will soon be able to get by on their own (with a little supervision, they already can). But if instead we are talking about translating a poet like Pushkin (a task Hofstadter himself has attempted), we can rest easy: only human beings are capable of translating works so complex, which inevitably require human intellect, creativity and understanding. Even if, with that passage by Dostoevsky, Google Translate did really well.