On November 15 of this year, Google announced updates to Google Translate in an article titled “Found in Translation: More accurate, fluent sentences in Google Translate.” The article describes a new system for the ten-year-old service: Neural Machine Translation (NMT). This system aims to transform machine translation and elevate the quality of translations; where the program used to translate piece by piece, it can now translate whole sentences at a time. This seemingly obvious change means the machine can take into account the context of a whole sentence rather than just piecing phrases together and hoping for the best. The article goes on to describe the rollout of NMT for eight language pairs, to and from English and French, German, Spanish, Portuguese, Chinese, Japanese, Korean, and Turkish. Although this is just the beginning, Google hopes to expand the update to all 103 languages Google Translate supports. The article ends with this: “We can’t wait for you to start translating and understanding the world just a little bit better” (Turovsky 2016).
With the invention of and latest updates to Google Translate, many people undervalue the work of a translator. Oftentimes when I tell people I want to be a translator, I am asked how often I refer to Google Translate while translating. I am here to tell the world publicly that I do not use Google Translate, and here’s why:
Google Translate scans the entered text to find patterns, and then the software sifts through data that has already been entered. Google has fed thousands of documents to the system to create the magical WordBank it uses to translate. It sounds very technical and precise, but the catch is that GT does not take into account the context of the whole text. We may enter one or two sentences, but the program doesn’t know where the text comes from or what the sentence just before it said. This method of translation is called “Statistical Machine Translation,” and it aims to imitate the process of human translation, the way we search the word banks already established in our own minds.
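To make that phrase-by-phrase idea concrete, here is a minimal sketch in Python. The tiny “phrase table” is invented for this illustration (it is not Google’s code or data, and a real statistical system learns millions of weighted entries from previously translated documents), but the mechanics are the same: chop the input into chunks, look up the translation the system has seen for each chunk, and glue the pieces back together with no sense of the surrounding text.

```python
# Toy illustration of phrase-by-phrase lookup translation.
# The phrase table below is made up for this example only.
phrase_table = {
    ("i", "sat", "on"): "je me suis assis sur",
    ("the", "bank"): "la banque",          # "bank" as in a financial bank
    ("of", "the", "river"): "de la rivière",
}

def toy_translate(sentence: str) -> str:
    words = sentence.lower().split()
    output = []
    i = 0
    while i < len(words):
        # Greedily try the longest phrase the table knows about.
        for length in range(len(words) - i, 0, -1):
            chunk = tuple(words[i:i + length])
            if chunk in phrase_table:
                output.append(phrase_table[chunk])
                i += length
                break
        else:
            # Unknown word: pass it through untranslated.
            output.append(words[i])
            i += 1
    return " ".join(output)

print(toy_translate("I sat on the bank of the river"))
# -> "je me suis assis sur la banque de la rivière"
# "banque" (a financial bank) is wrong here: the lookup never notices
# that "bank" sits next to "river", so the riverbank sense ("la rive") is lost.
```

The point of the sketch is the failure at the end: each chunk was translated “correctly” in isolation, and the sentence still came out wrong, because no chunk carries the context of its neighbors.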
Here’s the hitch, Google: yes, there are word-for-word equivalencies and rules when it comes to learning a language, all of which a machine can pick up. The real problem is that languages have so many exceptions and irregularities that a machine cannot build patterns that capture every one of them. Machines run on logic and precision, on patterns and similarities; humans are more complex than that, and so are languages. We cannot simplify languages so far that a machine could ever replace the work a human can do.
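Staying with the toy sketch above (again, invented entries, not Google’s output), here is the kind of exception I mean. Patch in perfectly reasonable word-for-word rules and the machine still translates an idiom literally, where a human would reach for the real French expression.

```python
# Word-for-word entries that are each "correct" on their own.
phrase_table[("it", "is", "raining")] = "il pleut"
phrase_table[("cats",)] = "chats"
phrase_table[("and",)] = "et"
phrase_table[("dogs",)] = "chiens"

print(toy_translate("It is raining cats and dogs"))
# -> "il pleut chats et chiens"  (literally, raining cats and dogs)
# A human translator would write "il pleut des cordes"
# ("it is raining ropes"), the actual French idiom for heavy rain.
```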
GT has been highly criticized over the years, and for obvious reasons: the proof is in the pudding, and we can all see from the translations it spits out that this machine is nowhere near perfect, let alone close to the cleverness and creativity of the human mind.