Published on 28 May 2021 by Iris Chen
In general, the most reliable way to judge the quality of a machine-translated sentence is to compare it with high-quality human translations, assessing its adequacy, fluency and accuracy.
At present, one of the industry's most widely recognized and adopted scoring standards in the field of machine translation is BLEU (Bilingual Evaluation Understudy). It compares a machine-translated sentence with one or more sets of human reference translations and computes a composite score: the higher the score, the closer the output is to high-quality human translation, and thus the better the machine translation.
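To make the idea concrete, here is a minimal sketch of sentence-level BLEU: modified (clipped) n-gram precision combined with a brevity penalty. This is a simplified illustration, not the exact scoring pipeline used in our tests; production evaluations typically add smoothing and aggregate scores over a whole corpus.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU for a tokenized candidate against
    a list of tokenized reference translations (no smoothing)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        if not cand_counts:
            return 0.0  # candidate too short to have any n-grams
        # Clip each n-gram count by its maximum count in any reference,
        # so repeating a correct word cannot inflate the score.
        max_ref = Counter()
        for ref in references:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        if clipped == 0:
            return 0.0  # zero precision at some order -> BLEU is 0 unsmoothed
        precisions.append(clipped / sum(cand_counts.values()))
    # Brevity penalty: compare candidate length to the closest reference length,
    # penalizing translations that are shorter than the reference.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    # Geometric mean of the n-gram precisions, scaled by the brevity penalty.
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A candidate identical to a reference scores 1.0, and scores fall toward 0 as n-gram overlap with the references decreases.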
We tested DeepTranslate against two other translation engines under the BLEU metric. On a random sample of 800 sentences, DeepTranslate scored 76.7, while translation engine A scored 41.9 and translation engine B scored 33.6. In addition to outscoring its peers overall, 713 of the 800 translated sentences, approximately 90% of the test set, received relatively high rankings.
Since its founding, DeepTranslate has greatly valued users' experience and feedback, and our translation quality has consistently earned high recognition. Our strong BLEU scores confirm this once again!
As our user base grows, our machine learning models will become smarter. We look forward to bringing better solutions to our clients. Should you have any further questions, please feel free to contact us anytime.