Orcun Ulgen 🌎 Simplifying Localization for SaaS: Enhance Your Web, Mobile, and Game Experiences🚀 | Software Architect | Co-Founder & CEO at Lugath

Automated Post-Translation LQA: How can human translation errors be detected automatically?

2 min read

Automated Linguistic Quality Assurance

Learn how automated linguistic quality assurance (post-translation LQA) detects human translation errors using the latest tools and technologies in the translation industry.

If you haven’t read the article Linguistic Quality Assurance: How to Measure Translation Quality of AI Translation?, we recommend reading it first.

Introduction

Automated post-translation LQA refers to using software tools and technologies to evaluate and ensure the quality of translated content. LQA focuses on linguistic elements such as grammar, punctuation, spelling, style, adherence to terminology, and consistency. Automated post-translation LQA aims to enhance the efficiency and accuracy of the translation process by reducing human error and speeding up the review phase. 

Some Key Aspects of Automated LQA:

1. Grammar and Spelling Checks

Automated tools can identify and correct grammatical errors, typos, and spelling mistakes in the translated text.

2. Consistency Checks

These tools ensure that terminology and phraseology are consistent throughout the document, especially when dealing with technical or specialized content.

3. Style and Tone Adherence

Automated post-translation LQA can check if the translation adheres to specific style guides and maintains the desired tone.

4. Terminology Management

Tools can verify the use of correct terminology, particularly in specialized fields such as medical, legal, or technical translations.

5. Syntax and Punctuation

The tools can check for proper sentence structure, punctuation use, and overall readability.

6. Localization

Automated post-translation LQA ensures that translations are appropriately localized for the target audience, taking into account cultural and regional differences.

7. Reporting and Metrics

Many automated post-translation LQA tools provide detailed reports and metrics on the quality of the translation, highlighting areas that need improvement.
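Most of these aspects feed into a per-category issue report. As a rough illustration of what such reporting can look like, here is a minimal Python sketch; the categories, word count, and error-density metric are invented for the example and do not reflect any specific tool's scoring model.

```python
from collections import Counter

# Issues found during an automated pass, tagged by category (illustrative data only).
issues = ["terminology", "consistency", "spelling", "terminology", "punctuation"]
word_count = 1200  # size of the translated document, assumed for the example

report = Counter(issues)
errors_per_1000_words = 1000 * sum(report.values()) / word_count

print("Issues by category:", dict(report))
print(f"Error density: {errors_per_1000_words:.1f} issues per 1,000 words")
```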

Examples of automated post-translation LQA tools include:

  • Xbench
  • Verifika
  • Linguistic Tool Box (LTB)
  • LexiQA
  • QA Distiller

Use these tools to detect the following fundamental issues:

1. Untranslated Segments (Empty Translation)

Segments that remain untranslated, appearing either blank or still in the original source language.

2. Incomplete Segments (Partial Translation)

Segments that have been only partially translated, leaving part of the text untranslated.
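The first two checks are simple enough to script yourself. Below is a minimal Python sketch; the (source, target) pairs and the leftover-word threshold are illustrative assumptions, not the logic of any particular tool: an empty target is flagged as untranslated, and a target that still contains a large share of source words is flagged as possibly partial.

```python
# Flag untranslated (empty) and possibly incomplete (partially translated) segments.
# The example segments and the 0.4 threshold are illustrative assumptions.
segments = [
    ("Save your changes before closing.", ""),                                      # untranslated
    ("Save your changes before closing.", "Guarde sus changes before closing."),    # partial
    ("Save your changes before closing.", "Guarde los cambios antes de cerrar."),   # fine
]

def leftover_ratio(source: str, target: str) -> float:
    """Share of source words that reappear verbatim in the target."""
    src_words = source.lower().split()
    tgt_words = set(target.lower().split())
    return sum(w in tgt_words for w in src_words) / max(len(src_words), 1)

for i, (src, tgt) in enumerate(segments, 1):
    if not tgt.strip():
        print(f"Segment {i}: empty translation")
    elif leftover_ratio(src, tgt) > 0.4:
        print(f"Segment {i}: possibly a partial translation")
```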

3. Inconsistency in Source (Different Source but Same Target Segments)

This occurs when different source segments translate into the same target segment, potentially indicating a loss of nuance or meaning.

4. Inconsistency in Target (Same Source but Different Target Segments)

This occurs when the same source segment translates into different target segments, leading to inconsistencies in the translation.
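Both inconsistency checks boil down to grouping the bilingual file by one side and looking for more than one value on the other. A rough sketch, assuming a plain list of (source, target) pairs rather than any specific tool's file format:

```python
from collections import defaultdict

# (source, target) pairs; the example data is illustrative only.
segments = [
    ("Cancel", "Annuler"),
    ("Cancel", "Abandonner"),   # same source, different targets -> target inconsistency
    ("Delete", "Supprimer"),
    ("Remove", "Supprimer"),    # different sources, same target -> source inconsistency
]

by_source, by_target = defaultdict(set), defaultdict(set)
for src, tgt in segments:
    by_source[src].add(tgt)
    by_target[tgt].add(src)

for src, targets in by_source.items():
    if len(targets) > 1:
        print(f"Inconsistency in target: {src!r} -> {sorted(targets)}")
for tgt, sources in by_target.items():
    if len(sources) > 1:
        print(f"Inconsistency in source: {sorted(sources)} -> {tgt!r}")
```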

5. Target Same as Source (Source or Target is Not Translated)

Segments where the target text is identical to the source text, indicating that the translation may have been overlooked or not completed.

6. Untranslatables (Segments Containing Texts that Should Remain Untranslated)

Texts that should not be translated, such as proper nouns, technical terms, codes, or other specific elements that must remain in their original form.
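Checks 5 and 6 are two sides of the same rule: an identical source and target is usually an error, except when the segment (or a token inside it) is on a do-not-translate list. A hedged sketch, with a made-up allowlist and example data:

```python
# Segments whose target equals the source are suspicious, unless they appear on a
# do-not-translate list. The allowlist and example segments are illustrative assumptions.
DO_NOT_TRANSLATE = {"API", "Lugath"}

segments = [
    ("Settings", "Settings"),                                  # likely a forgotten translation
    ("API", "API"),                                            # legitimate untranslatable
    ("Sign in to Lugath.", "Melden Sie sich bei Lugat an."),   # protected token altered
]

for i, (src, tgt) in enumerate(segments, 1):
    if src == tgt and src not in DO_NOT_TRANSLATE:
        print(f"Segment {i}: target is identical to source -> {src!r}")
    for token in DO_NOT_TRANSLATE:
        if token in src and token not in tgt:
            print(f"Segment {i}: untranslatable {token!r} missing from target")
```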

7. Text Formatting (Formatting Issues)

Discrepancies in text formatting between the source and target segments, such as bold, italics, font size, and other stylistic elements.

8. Repeated Words (Repetition of Words)

Instances where words are unnecessarily repeated in the translation, which can affect readability and professionalism.
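Accidental doubling is usually caught with a back-reference regex. A one-pattern sketch (the sample sentence is invented):

```python
import re

# \b(\w+)\s+\1\b matches a word immediately repeated; IGNORECASE also catches "The the".
REPEATED = re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE)

target = "Please save the the document before you you exit."
for match in REPEATED.finditer(target):
    print(f"Repeated word: {match.group(0)!r} at position {match.start()}")
```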

9. Key Term Mismatch (Incorrect or Inconsistent Use of Key Terms)

Situations where key terms are translated inconsistently with the established terminology or glossary, potentially altering the intended meaning.
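Terminology checks compare each segment against a glossary: whenever a source term occurs, at least one of its approved target equivalents must occur too. A simplified sketch with an invented two-entry glossary; real tools also handle inflection and stemming, which this does not:

```python
# Glossary maps a source term to its approved target renderings (illustrative entries).
GLOSSARY = {
    "invoice": {"Rechnung"},
    "dashboard": {"Dashboard", "Übersicht"},
}

segments = [
    ("Open the invoice from the dashboard.", "Öffnen Sie die Faktura über das Dashboard."),
]

for i, (src, tgt) in enumerate(segments, 1):
    for term, approved in GLOSSARY.items():
        if term in src.lower() and not any(a.lower() in tgt.lower() for a in approved):
            print(f"Segment {i}: key term {term!r} not rendered as any of {sorted(approved)}")
```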

10. Tag Mismatch (Mismatched or Incorrectly Placed Tags)

Issues with the placement or content of tags in the translation, which can affect the formatting and functionality of the translated text, especially in software and web localization.
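A basic tag check extracts markup and placeholder tokens from both sides and compares them. The regex and example below are assumptions; they compare only the set of tags, not their order or nesting:

```python
import re
from collections import Counter

# Capture HTML/XML-style tags and {placeholder} tokens; the pattern is an assumption.
TAG = re.compile(r"</?\w+[^>]*>|\{\w+\}")

segments = [
    ("Click <b>Save</b> to keep {count} items.",
     "Klicken Sie auf <b>Speichern, um {count} Elemente zu behalten."),  # missing </b>
]

for i, (src, tgt) in enumerate(segments, 1):
    src_tags, tgt_tags = Counter(TAG.findall(src)), Counter(TAG.findall(tgt))
    if src_tags != tgt_tags:
        missing, extra = src_tags - tgt_tags, tgt_tags - src_tags
        print(f"Segment {i}: tag mismatch, missing={dict(missing)}, extra={dict(extra)}")
```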

11. Number Mismatch (Differences in Numbers between Source and Target)

Discrepancies in numbers between the source and target segments, which can be critical in technical, legal, or financial documents.
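Number checks extract every numeric token on both sides and compare them. Locale formatting (1,000 vs 1.000) complicates the real case; the sketch below simply strips separators and is only one possible approximation:

```python
import re
from collections import Counter

NUMBER = re.compile(r"\d+(?:[.,]\d+)*")

def numbers(text: str) -> Counter:
    # Strip thousands/decimal separators so "1,000" and "1000" compare as equal.
    return Counter(n.replace(",", "").replace(".", "") for n in NUMBER.findall(text))

src = "The warranty covers 24 months and up to 1,000 devices."
tgt = "Die Garantie deckt 12 Monate und bis zu 1000 Geräte ab."

if numbers(src) != numbers(tgt):
    print("Number mismatch, missing in target:", dict(numbers(src) - numbers(tgt)))
```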

12. Double Blanks (Unnecessary Double Spaces)

Occurrences of double spaces within the text, which can be a typographical error and affect the appearance of the document.
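This is the simplest check of all; a single regex is enough:

```python
import re

target = "This  sentence contains a double  space twice."
for match in re.finditer(r" {2,}", target):
    print(f"Double blank at position {match.start()}")
```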

Summary

Automated linguistic quality assurance (post-translation LQA) typically complements human review to ensure the highest quality in translations, combining the speed and precision of automation with the nuanced understanding of human linguists. Most LSPs rely on these tools, which also saves translators the trouble of explaining grammar rules to clients and convincing them that there are no mistakes.

You can read the article Localization Guide for SaaS Companies: AI Translation vs Human Translators or Both? to learn more.

If you do not have enough budget to work with LSPs, you can automate translation and localization quality checks with Lugath.