9 Best Practices for Machine Translation Post-Editing (MTPE)

Businesses that regularly need content translated likely know that, while machine translation (MT) has come a long way in recent years, it still cannot match the accuracy of human translators. That is why many of those businesses are now turning to machine translation post-editing (MTPE) to handle their bulk translation needs. Read on to find out everything you need to know about this process, which is designed to bridge the gap between the speed of machine translation and the accuracy of human linguists, along with the best practices that will help keep your translations as flawless as possible.

What is MTPE?

MTPE is a process by which a source text is translated using machine translation and the post-editing work is done by a human linguist. Sometimes also referred to as Post-Edited Machine Translation (PEMT), the process offers the best of both worlds, combining the efficiency and speed of machine translation with the expertise of human editors.

Benefits of MTPE

Speed

Using machine translation speeds up the process significantly compared to relying on a human translator alone, especially for repetitive translations or tasks.

Accuracy

Having a human editor ensures that the resulting work is error-free and consistent, as the post-editor will pay close attention to grammar, punctuation, spelling, and style, as well as to any omissions or errors in the translation.

Creativity

Using MT to translate bulk content instantly also frees human linguists to focus on the more creative elements of perfecting the text during post-editing.

Best Practices for MTPE

Source text

  • The source text should be as free of spelling or grammatical errors as possible, with consistent formatting and terminology used throughout the document.

  • Overly complex or wordy sentences should also be avoided to the extent possible. A good rule of thumb is to try to limit sentences to around 20 words.

  • Dates should be spelled out when possible to mitigate confusion based on regional differences in date formatting (Month/Day vs. Day/Month, for instance). A quick automated check for this and the sentence-length guideline is sketched below.
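
To make those last two guidelines concrete, here is a minimal sketch in Python of what a pre-translation check on the source text could look like. The function name, the 20-word threshold (taken from the rule of thumb above), and the naive sentence splitter are illustrative assumptions rather than part of any particular tool.

```python
import re

# Minimal sketch of a source-text readiness check (illustrative only).
# Flags sentences over a word limit and numeric dates that could be read
# differently by region (03/04 as 3 April or March 4).
AMBIGUOUS_DATE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

def check_source_text(text: str, max_words: int = 20) -> list[str]:
    warnings = []
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for i, sentence in enumerate(sentences, start=1):
        if len(sentence.split()) > max_words:
            warnings.append(f"Sentence {i}: over {max_words} words, consider splitting.")
        if AMBIGUOUS_DATE.search(sentence):
            warnings.append(f"Sentence {i}: numeric date found, spell out the month.")
    return warnings

print(check_source_text("We signed the contract on 03/04/2025. Delivery follows next week."))
```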

MT engine

  • To ensure the highest-quality translations as well as the highest level of data privacy, it’s best to opt for a customized MT engine, built using the company’s own data, instead of a generic one.
  • Of course, businesses that cannot shell out for a custom MT engine can still use MT to obtain high-quality translations; what’s important is that they choose an MT engine built with high-quality business data rather than relying on free generic tools such as Google Translate, which can be unreliable for business content.

Post-editing

  • For overly technical translations or ones belonging to a niche field, providing reference documents such as a glossary of terms to the MTPE editor can be a good idea.
  • Post-editors should take care not to under-edit or over-edit the MT translation. Under-editing, or insufficient editing, runs the risk of ending up with a translated text that’s rife with errors in spelling, grammar, and even meaning. Over-editing, by contrast, is when post-editors make preferential or stylistic edits that aren’t necessary: if the machine’s word choice works, or the sentence structure is correct and the meaning is clear, it’s best to leave the segment alone.
  • No information in the source text should be left out of the target copy, and the target copy shouldn’t include information that isn’t in the source text. 
  • Last but certainly not least, post-editing should be followed up by a process of analysis involving all relevant parties such as the copywriter for the source text, the post-editor, or even the client in some cases. Setting up a system for this type of review and feedback enables businesses to improve their custom MT engine over time, which results in improved translations in the future.

MTPE Strategies

When it comes to the post-editing phase, there are two basic MTPE strategies that businesses can follow. The first is light post-editing, where the MTPE editor focuses only on the bare essentials such as grammar, spelling, and ensuring that the translation is accurate. The other is full post-editing, where the editor covers not only those essentials but also takes the time to ensure that the style and tone are consistent throughout the text and that details such as expressions are appropriately localized.

Which MTPE strategy businesses choose ultimately comes down to three main factors: time, cost, and quality. The turnaround time for the project, the budget available for MTPE, and the desired level of quality are all variables that can influence business managers’ decisions for their translation projects.

Post-Editing Tools

These days, most computer-aided translation (CAT) software used for MTPE comes with a variety of tools that make the life of post-editors easier. The first of these is translation memory, a database of previously translated segments. If a segment stored in the translation memory appears in the source text again, the CAT tool automatically fills in the corresponding target segment for the post-editor to review.
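
Conceptually, a translation memory behaves like a lookup table keyed on source segments, with fuzzy matching for segments that are close but not identical. The sketch below only illustrates that idea, assuming an in-memory dictionary, invented sentence pairs, and an arbitrary 75% fuzzy-match threshold; it is not how any particular CAT tool is implemented.

```python
from difflib import SequenceMatcher

# Minimal sketch of a translation-memory lookup (illustrative only).
# Real CAT tools use indexed databases, not an in-memory dict, and the
# sentence pairs below are invented examples.
translation_memory = {
    "The order has been shipped.": "La commande a été expédiée.",
    "Your invoice is attached.": "Votre facture est jointe.",
}

def tm_lookup(segment: str, threshold: float = 0.75):
    """Return (match_type, suggested_target), or (None, None) if no match."""
    if segment in translation_memory:                    # exact (100%) match
        return "exact", translation_memory[segment]
    best_score, best_target = 0.0, None
    for source, target in translation_memory.items():    # fuzzy match
        score = SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best_score, best_target = score, target
    if best_score >= threshold:
        return f"fuzzy ({best_score:.0%})", best_target
    return None, None

print(tm_lookup("The order has been shipped."))  # exact hit
print(tm_lookup("The order was shipped."))       # fuzzy hit for the post-editor to review
```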

A similar but distinct feature is the termbase, which is a manually entered list of bilingual terms for a specific industry or subject. These termbases are especially great for post-editors who may not be well-versed in the subject at hand, or in cases where multiple post-editors are working on a company’s translations, as they enable the editors to keep certain terms consistent throughout the same translation or across multiple translations.
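
A termbase check can be pictured as a simple rule: whenever an approved source term appears in a segment, the approved target term should appear in the post-edited translation. The sketch below illustrates the idea with invented term pairs; real CAT tools handle inflection and morphology far more robustly than this plain substring match.

```python
# Minimal sketch of a termbase consistency check (illustrative only).
# Term pairs are invented examples; substring matching ignores inflection.
termbase = {
    "machine translation": "traduction automatique",
    "post-editing": "post-édition",
}

def check_terms(source: str, target: str) -> list[str]:
    issues = []
    for src_term, tgt_term in termbase.items():
        if src_term in source.lower() and tgt_term not in target.lower():
            issues.append(f"'{src_term}' should be rendered as '{tgt_term}'.")
    return issues

print(check_terms(
    "Machine translation speeds up the workflow.",
    "La traduction machine accélère le flux de travail.",  # non-approved term used
))
```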

Another useful feature found in most CAT software is a Quality Assurance (QA) tool, which helps businesses spot any errors or inaccuracies that may have been overlooked in the translation and post-editing processes. QA tools are essentially a last line of defense in the translation process, meant to ensure that the output is as high-quality as possible.
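
Typical automated QA checks are mechanical: empty targets, mismatched numbers, doubled spaces, or targets left identical to the source. A minimal sketch of such checks might look like the following; the specific rules and example segments are illustrative, not a description of any particular QA tool.

```python
import re

# Minimal sketch of common automated QA checks run after post-editing
# (illustrative rules only, not an exhaustive list).
def qa_check(source: str, target: str) -> list[str]:
    issues = []
    if not target.strip():
        issues.append("Empty target segment.")
    if re.findall(r"\d+", source) != re.findall(r"\d+", target):
        issues.append("Numbers in source and target do not match.")
    if "  " in target:
        issues.append("Double space in target.")
    if source.strip() and target.strip() == source.strip():
        issues.append("Target identical to source (possibly untranslated).")
    return issues

print(qa_check("Deliver 15 units by 3 May.", "Livrer 50 unités avant le 3 mai."))
```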

What Qualifications Does an MTPE Editor Need?

It is important to note that MTPE editors are not translators but rather linguists who can compare the source text to the translated version and make corrections as necessary. In other words, it is their job to ensure that the translation is accurate and meaningful.

For this reason, it’s important that an MTPE editor has experience working with the specific language pair in question, and that he or she is familiar with the context of the translation. Equally important is the MTPE editor’s command of the CAT software and any other tools used. Additionally, a good MTPE editor must be a sound decision-maker, as he or she will regularly need to make quick, rational decisions about whether to tweak the MT output or to disregard it and translate the segment from scratch.

If you’re looking to streamline your company’s upcoming translation projects by combining the power of machine translation with the expertise of highly qualified post-editors, check out Tarjama’s cost-effective MT solutions for businesses.
